00:00:00.001 Started by upstream project "autotest-per-patch" build number 120480 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.122 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.123 The recommended git tool is: git 00:00:00.123 using credential 00000000-0000-0000-0000-000000000002 00:00:00.124 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.157 Fetching changes from the remote Git repository 00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.191 Using shallow fetch with depth 1 00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.191 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.227 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.239 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.249 Checking out Revision 27f13fcb4eea6a447c9f3d131408acb483141c09 (FETCH_HEAD) 00:00:04.249 > git config core.sparsecheckout # timeout=10 00:00:04.261 > git read-tree -mu HEAD # timeout=10 00:00:04.277 > git checkout -f 27f13fcb4eea6a447c9f3d131408acb483141c09 # timeout=5 00:00:04.294 Commit message: "docker/pdu_power: add PDU APC-C14 and APC-C18" 00:00:04.294 > git rev-list --no-walk 27f13fcb4eea6a447c9f3d131408acb483141c09 # timeout=10 00:00:04.376 [Pipeline] Start of Pipeline 00:00:04.389 [Pipeline] library 00:00:04.390 Loading library shm_lib@master 00:00:04.390 Library shm_lib@master is cached. Copying from home. 00:00:04.408 [Pipeline] node 00:00:19.410 Still waiting to schedule task 00:00:19.410 Waiting for next available executor on ‘vagrant-vm-host’ 00:07:54.531 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:07:54.533 [Pipeline] { 00:07:54.546 [Pipeline] catchError 00:07:54.548 [Pipeline] { 00:07:54.566 [Pipeline] wrap 00:07:54.578 [Pipeline] { 00:07:54.587 [Pipeline] stage 00:07:54.589 [Pipeline] { (Prologue) 00:07:54.612 [Pipeline] echo 00:07:54.613 Node: VM-host-SM9 00:07:54.620 [Pipeline] cleanWs 00:07:54.629 [WS-CLEANUP] Deleting project workspace... 00:07:54.629 [WS-CLEANUP] Deferred wipeout is used... 
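The jbp checkout at the top of this log is a shallow, single-ref fetch followed by a detached checkout of FETCH_HEAD. A condensed hand-typed equivalent of what the git plugin ran (URL and revision copied from the log; running it standalone like this is only an illustration of the pattern, not the plugin's exact sequence):

    git init jbp && cd jbp
    git fetch --tags --force --depth=1 \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f 27f13fcb4eea6a447c9f3d131408acb483141c09   # "docker/pdu_power: add PDU APC-C14 and APC-C18"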
00:07:54.635 [WS-CLEANUP] done 00:07:54.803 [Pipeline] setCustomBuildProperty 00:07:54.882 [Pipeline] nodesByLabel 00:07:54.883 Found a total of 1 nodes with the 'sorcerer' label 00:07:54.893 [Pipeline] httpRequest 00:07:54.897 HttpMethod: GET 00:07:54.898 URL: http://10.211.164.101/packages/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:07:54.899 Sending request to url: http://10.211.164.101/packages/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:07:54.903 Response Code: HTTP/1.1 200 OK 00:07:54.903 Success: Status code 200 is in the accepted range: 200,404 00:07:54.904 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:07:55.262 [Pipeline] sh 00:07:55.535 + tar --no-same-owner -xf jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:07:55.556 [Pipeline] httpRequest 00:07:55.561 HttpMethod: GET 00:07:55.561 URL: http://10.211.164.101/packages/spdk_0fa934e8f41d43921e51160cbf7229a1d6eece47.tar.gz 00:07:55.562 Sending request to url: http://10.211.164.101/packages/spdk_0fa934e8f41d43921e51160cbf7229a1d6eece47.tar.gz 00:07:55.563 Response Code: HTTP/1.1 200 OK 00:07:55.563 Success: Status code 200 is in the accepted range: 200,404 00:07:55.564 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_0fa934e8f41d43921e51160cbf7229a1d6eece47.tar.gz 00:07:57.786 [Pipeline] sh 00:07:58.063 + tar --no-same-owner -xf spdk_0fa934e8f41d43921e51160cbf7229a1d6eece47.tar.gz 00:08:01.475 [Pipeline] sh 00:08:01.755 + git -C spdk log --oneline -n5 00:08:01.755 0fa934e8f raid: add callback to raid_bdev_examine_sb() 00:08:01.755 115be10bf test/raid: always create pt bdevs in rebuild test 00:08:01.755 318c184cf test/raid: remove unnecessary recreating of base bdevs 00:08:01.755 23e5871e3 raid: allow re-adding base bdev when in CONFIGURING state 00:08:01.755 1f4493e34 raid: limit the no superblock examine case 00:08:01.775 [Pipeline] writeFile 00:08:01.790 [Pipeline] sh 00:08:02.088 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:08:02.100 [Pipeline] sh 00:08:02.379 + cat autorun-spdk.conf 00:08:02.379 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:02.379 SPDK_TEST_NVMF=1 00:08:02.379 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:02.379 SPDK_TEST_URING=1 00:08:02.379 SPDK_TEST_USDT=1 00:08:02.379 SPDK_RUN_UBSAN=1 00:08:02.379 NET_TYPE=virt 00:08:02.379 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:02.386 RUN_NIGHTLY=0 00:08:02.389 [Pipeline] } 00:08:02.406 [Pipeline] // stage 00:08:02.421 [Pipeline] stage 00:08:02.423 [Pipeline] { (Run VM) 00:08:02.437 [Pipeline] sh 00:08:02.716 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:08:02.716 + echo 'Start stage prepare_nvme.sh' 00:08:02.716 Start stage prepare_nvme.sh 00:08:02.716 + [[ -n 1 ]] 00:08:02.716 + disk_prefix=ex1 00:08:02.716 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:08:02.716 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:08:02.716 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:08:02.716 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:02.716 ++ SPDK_TEST_NVMF=1 00:08:02.716 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:02.716 ++ SPDK_TEST_URING=1 00:08:02.716 ++ SPDK_TEST_USDT=1 00:08:02.716 ++ SPDK_RUN_UBSAN=1 00:08:02.716 ++ NET_TYPE=virt 00:08:02.716 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:02.716 ++ RUN_NIGHTLY=0 00:08:02.716 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:08:02.716 + nvme_files=() 00:08:02.716 + declare -A nvme_files 
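The autorun-spdk.conf written above is a plain shell fragment; prepare_nvme.sh simply sources it, as the "+ source .../autorun-spdk.conf" trace above shows. A minimal sketch of consuming it (variable names taken from the file above; the gating logic itself is illustrative, not SPDK's actual test runner):

    #!/bin/bash
    # Every option in the conf file is an ordinary 0/1 (or string) shell variable.
    source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf
    # Illustrative gate: only run the NVMe-oF/TCP suite when the relevant flags are set.
    if [[ "$SPDK_TEST_NVMF" == 1 && "$SPDK_TEST_NVMF_TRANSPORT" == tcp ]]; then
        echo "NVMe-oF/TCP tests enabled (uring=$SPDK_TEST_URING ubsan=$SPDK_RUN_UBSAN net=$NET_TYPE)"
    fi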
00:08:02.716 + backend_dir=/var/lib/libvirt/images/backends 00:08:02.716 + nvme_files['nvme.img']=5G 00:08:02.716 + nvme_files['nvme-cmb.img']=5G 00:08:02.716 + nvme_files['nvme-multi0.img']=4G 00:08:02.716 + nvme_files['nvme-multi1.img']=4G 00:08:02.716 + nvme_files['nvme-multi2.img']=4G 00:08:02.716 + nvme_files['nvme-openstack.img']=8G 00:08:02.716 + nvme_files['nvme-zns.img']=5G 00:08:02.716 + (( SPDK_TEST_NVME_PMR == 1 )) 00:08:02.716 + (( SPDK_TEST_FTL == 1 )) 00:08:02.716 + (( SPDK_TEST_NVME_FDP == 1 )) 00:08:02.716 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:08:02.716 + for nvme in "${!nvme_files[@]}" 00:08:02.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:08:02.716 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:08:02.716 + for nvme in "${!nvme_files[@]}" 00:08:02.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:08:02.716 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:08:02.716 + for nvme in "${!nvme_files[@]}" 00:08:02.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:08:02.716 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:08:02.716 + for nvme in "${!nvme_files[@]}" 00:08:02.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:08:02.717 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:08:02.717 + for nvme in "${!nvme_files[@]}" 00:08:02.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:08:02.717 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:08:02.717 + for nvme in "${!nvme_files[@]}" 00:08:02.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:08:02.717 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:08:02.717 + for nvme in "${!nvme_files[@]}" 00:08:02.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:08:03.651 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:08:03.651 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:08:03.651 + echo 'End stage prepare_nvme.sh' 00:08:03.651 End stage prepare_nvme.sh 00:08:03.663 [Pipeline] sh 00:08:03.943 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:08:03.943 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:08:03.943 00:08:03.943 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:08:03.943 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:08:03.943 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:08:03.943 HELP=0 00:08:03.943 DRY_RUN=0 00:08:03.943 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:08:03.943 NVME_DISKS_TYPE=nvme,nvme, 00:08:03.943 NVME_AUTO_CREATE=0 00:08:03.943 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:08:03.943 NVME_CMB=,, 00:08:03.943 NVME_PMR=,, 00:08:03.943 NVME_ZNS=,, 00:08:03.943 NVME_MS=,, 00:08:03.943 NVME_FDP=,, 00:08:03.943 SPDK_VAGRANT_DISTRO=fedora38 00:08:03.943 SPDK_VAGRANT_VMCPU=10 00:08:03.943 SPDK_VAGRANT_VMRAM=12288 00:08:03.943 SPDK_VAGRANT_PROVIDER=libvirt 00:08:03.943 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:08:03.943 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:08:03.943 SPDK_OPENSTACK_NETWORK=0 00:08:03.943 VAGRANT_PACKAGE_BOX=0 00:08:03.943 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:08:03.943 FORCE_DISTRO=true 00:08:03.943 VAGRANT_BOX_VERSION= 00:08:03.943 EXTRA_VAGRANTFILES= 00:08:03.943 NIC_MODEL=e1000 00:08:03.943 00:08:03.943 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:08:03.943 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:08:07.329 Bringing machine 'default' up with 'libvirt' provider... 00:08:07.895 ==> default: Creating image (snapshot of base box volume). 00:08:07.895 ==> default: Creating domain with the following settings... 00:08:07.895 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713364036_9b9f2e5717865bd32aa0 00:08:07.895 ==> default: -- Domain type: kvm 00:08:07.895 ==> default: -- Cpus: 10 00:08:07.895 ==> default: -- Feature: acpi 00:08:07.895 ==> default: -- Feature: apic 00:08:07.895 ==> default: -- Feature: pae 00:08:07.895 ==> default: -- Memory: 12288M 00:08:07.895 ==> default: -- Memory Backing: hugepages: 00:08:07.895 ==> default: -- Management MAC: 00:08:07.895 ==> default: -- Loader: 00:08:07.895 ==> default: -- Nvram: 00:08:07.895 ==> default: -- Base box: spdk/fedora38 00:08:07.895 ==> default: -- Storage pool: default 00:08:07.895 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713364036_9b9f2e5717865bd32aa0.img (20G) 00:08:07.895 ==> default: -- Volume Cache: default 00:08:07.895 ==> default: -- Kernel: 00:08:07.895 ==> default: -- Initrd: 00:08:07.895 ==> default: -- Graphics Type: vnc 00:08:07.895 ==> default: -- Graphics Port: -1 00:08:07.895 ==> default: -- Graphics IP: 127.0.0.1 00:08:07.895 ==> default: -- Graphics Password: Not defined 00:08:07.895 ==> default: -- Video Type: cirrus 00:08:07.895 ==> default: -- Video VRAM: 9216 00:08:07.895 ==> default: -- Sound Type: 00:08:07.895 ==> default: -- Keymap: en-us 00:08:07.895 ==> default: -- TPM Path: 00:08:07.895 ==> default: -- INPUT: type=mouse, bus=ps2 00:08:07.895 ==> default: -- Command line args: 00:08:07.895 ==> default: -> value=-device, 00:08:07.895 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:08:07.895 ==> default: -> value=-drive, 00:08:07.895 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:08:07.895 ==> default: -> value=-device, 00:08:07.895 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:07.895 ==> default: -> value=-device, 00:08:07.895 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:08:07.895 ==> default: -> value=-drive, 00:08:07.895 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:08:07.895 ==> default: -> value=-device, 00:08:07.895 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:07.895 ==> default: -> value=-drive, 00:08:07.895 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:08:07.895 ==> default: -> value=-device, 00:08:07.895 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:07.895 ==> default: -> value=-drive, 00:08:07.895 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:08:07.895 ==> default: -> value=-device, 00:08:07.895 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:08.153 ==> default: Creating shared folders metadata... 00:08:08.153 ==> default: Starting domain. 00:08:09.534 ==> default: Waiting for domain to get an IP address... 00:08:27.619 ==> default: Waiting for SSH to become available... 00:08:28.996 ==> default: Configuring and enabling network interfaces... 00:08:33.204 default: SSH address: 192.168.121.213:22 00:08:33.204 default: SSH username: vagrant 00:08:33.204 default: SSH auth method: private key 00:08:35.120 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:08:43.230 ==> default: Mounting SSHFS shared folder... 00:08:43.798 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:08:43.798 ==> default: Checking Mount.. 00:08:44.733 ==> default: Folder Successfully Mounted! 00:08:44.733 ==> default: Running provisioner: file... 00:08:45.668 default: ~/.gitconfig => .gitconfig 00:08:45.925 00:08:45.925 SUCCESS! 00:08:45.925 00:08:45.925 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:08:45.925 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:08:45.925 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
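The -drive/-device pairs passed to the domain above attach two emulated NVMe controllers: serial 12340 with a single namespace backed by ex1-nvme.img, and serial 12341 with three namespaces backed by the ex1-nvme-multi*.img files, which is how the guest enumerates them later in the log (nvme0n1 and nvme1n1, nvme1n2, nvme1n3). A condensed hand-written form of that argument pattern, one namespace per controller (device properties and paths copied from the log; invoking qemu-system-x86_64 directly is only an illustration, libvirt assembles the real command line):

    qemu-system-x86_64 ... \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096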
00:08:45.925 00:08:45.935 [Pipeline] } 00:08:45.950 [Pipeline] // stage 00:08:45.958 [Pipeline] dir 00:08:45.958 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:08:45.960 [Pipeline] { 00:08:45.973 [Pipeline] catchError 00:08:45.975 [Pipeline] { 00:08:45.989 [Pipeline] sh 00:08:46.265 + vagrant ssh-config --host vagrant 00:08:46.265 + sed -ne /^Host/,$p 00:08:46.265 + tee ssh_conf 00:08:50.517 Host vagrant 00:08:50.517 HostName 192.168.121.213 00:08:50.517 User vagrant 00:08:50.517 Port 22 00:08:50.517 UserKnownHostsFile /dev/null 00:08:50.518 StrictHostKeyChecking no 00:08:50.518 PasswordAuthentication no 00:08:50.518 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:08:50.518 IdentitiesOnly yes 00:08:50.518 LogLevel FATAL 00:08:50.518 ForwardAgent yes 00:08:50.518 ForwardX11 yes 00:08:50.518 00:08:50.530 [Pipeline] withEnv 00:08:50.532 [Pipeline] { 00:08:50.544 [Pipeline] sh 00:08:50.818 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:08:50.818 source /etc/os-release 00:08:50.818 [[ -e /image.version ]] && img=$(< /image.version) 00:08:50.818 # Minimal, systemd-like check. 00:08:50.818 if [[ -e /.dockerenv ]]; then 00:08:50.818 # Clear garbage from the node's name: 00:08:50.818 # agt-er_autotest_547-896 -> autotest_547-896 00:08:50.818 # $HOSTNAME is the actual container id 00:08:50.818 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:50.818 if mountpoint -q /etc/hostname; then 00:08:50.818 # We can assume this is a mount from a host where container is running, 00:08:50.818 # so fetch its hostname to easily identify the target swarm worker. 00:08:50.818 container="$(< /etc/hostname) ($agent)" 00:08:50.818 else 00:08:50.818 # Fallback 00:08:50.818 container=$agent 00:08:50.818 fi 00:08:50.818 fi 00:08:50.818 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:50.818 00:08:51.088 [Pipeline] } 00:08:51.107 [Pipeline] // withEnv 00:08:51.115 [Pipeline] setCustomBuildProperty 00:08:51.130 [Pipeline] stage 00:08:51.131 [Pipeline] { (Tests) 00:08:51.150 [Pipeline] sh 00:08:51.466 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:51.489 [Pipeline] timeout 00:08:51.490 Timeout set to expire in 30 min 00:08:51.491 [Pipeline] { 00:08:51.505 [Pipeline] sh 00:08:51.792 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:08:52.358 HEAD is now at 0fa934e8f raid: add callback to raid_bdev_examine_sb() 00:08:52.371 [Pipeline] sh 00:08:52.650 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:08:52.922 [Pipeline] sh 00:08:53.201 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:53.472 [Pipeline] sh 00:08:53.750 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:08:54.009 ++ readlink -f spdk_repo 00:08:54.009 + DIR_ROOT=/home/vagrant/spdk_repo 00:08:54.009 + [[ -n /home/vagrant/spdk_repo ]] 00:08:54.009 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:08:54.009 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:08:54.009 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:08:54.009 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:08:54.009 + [[ -d /home/vagrant/spdk_repo/output ]] 00:08:54.009 + cd /home/vagrant/spdk_repo 00:08:54.009 + source /etc/os-release 00:08:54.009 ++ NAME='Fedora Linux' 00:08:54.009 ++ VERSION='38 (Cloud Edition)' 00:08:54.009 ++ ID=fedora 00:08:54.009 ++ VERSION_ID=38 00:08:54.009 ++ VERSION_CODENAME= 00:08:54.009 ++ PLATFORM_ID=platform:f38 00:08:54.009 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:08:54.009 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:54.009 ++ LOGO=fedora-logo-icon 00:08:54.009 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:08:54.009 ++ HOME_URL=https://fedoraproject.org/ 00:08:54.009 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:08:54.009 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:54.009 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:54.009 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:54.009 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:08:54.009 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:54.009 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:08:54.009 ++ SUPPORT_END=2024-05-14 00:08:54.009 ++ VARIANT='Cloud Edition' 00:08:54.009 ++ VARIANT_ID=cloud 00:08:54.009 + uname -a 00:08:54.009 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:08:54.009 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:54.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.574 Hugepages 00:08:54.574 node hugesize free / total 00:08:54.574 node0 1048576kB 0 / 0 00:08:54.574 node0 2048kB 0 / 0 00:08:54.574 00:08:54.574 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:54.574 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:54.574 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:54.574 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:54.574 + rm -f /tmp/spdk-ld-path 00:08:54.574 + source autorun-spdk.conf 00:08:54.574 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:54.574 ++ SPDK_TEST_NVMF=1 00:08:54.574 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:54.574 ++ SPDK_TEST_URING=1 00:08:54.574 ++ SPDK_TEST_USDT=1 00:08:54.574 ++ SPDK_RUN_UBSAN=1 00:08:54.574 ++ NET_TYPE=virt 00:08:54.574 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:54.574 ++ RUN_NIGHTLY=0 00:08:54.574 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:54.574 + [[ -n '' ]] 00:08:54.574 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:08:54.574 + for M in /var/spdk/build-*-manifest.txt 00:08:54.574 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:54.574 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:54.574 + for M in /var/spdk/build-*-manifest.txt 00:08:54.574 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:54.574 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:54.574 ++ uname 00:08:54.574 + [[ Linux == \L\i\n\u\x ]] 00:08:54.574 + sudo dmesg -T 00:08:54.574 + sudo dmesg --clear 00:08:54.574 + dmesg_pid=5177 00:08:54.574 + [[ Fedora Linux == FreeBSD ]] 00:08:54.574 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:54.574 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:54.574 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:54.574 + sudo dmesg -Tw 00:08:54.574 + [[ -x /usr/src/fio-static/fio ]] 00:08:54.574 + export FIO_BIN=/usr/src/fio-static/fio 00:08:54.574 + FIO_BIN=/usr/src/fio-static/fio 00:08:54.574 + [[ '' == 
\/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:54.574 + [[ ! -v VFIO_QEMU_BIN ]] 00:08:54.574 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:54.574 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:54.574 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:54.574 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:54.574 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:54.574 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:54.574 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:54.574 Test configuration: 00:08:54.574 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:54.574 SPDK_TEST_NVMF=1 00:08:54.574 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:54.574 SPDK_TEST_URING=1 00:08:54.574 SPDK_TEST_USDT=1 00:08:54.574 SPDK_RUN_UBSAN=1 00:08:54.574 NET_TYPE=virt 00:08:54.574 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:54.833 RUN_NIGHTLY=0 14:28:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.833 14:28:03 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:54.833 14:28:03 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.833 14:28:03 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.833 14:28:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.833 14:28:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.833 14:28:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.833 14:28:03 -- paths/export.sh@5 -- $ export PATH 00:08:54.833 14:28:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.833 14:28:03 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:08:54.833 14:28:03 -- common/autobuild_common.sh@435 -- $ date +%s 00:08:54.833 14:28:03 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713364083.XXXXXX 00:08:54.833 14:28:03 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713364083.8530Rp 00:08:54.833 14:28:03 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:08:54.833 14:28:03 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:08:54.833 14:28:03 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:08:54.833 14:28:03 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:54.833 14:28:03 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:08:54.833 14:28:03 -- common/autobuild_common.sh@451 -- $ get_config_params 00:08:54.833 14:28:03 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:08:54.833 14:28:03 -- common/autotest_common.sh@10 -- $ set +x 00:08:54.833 14:28:03 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:08:54.833 14:28:03 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:08:54.833 14:28:03 -- pm/common@17 -- $ local monitor 00:08:54.833 14:28:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:54.833 14:28:03 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5211 00:08:54.833 14:28:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:54.833 14:28:03 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5213 00:08:54.833 14:28:03 -- pm/common@21 -- $ date +%s 00:08:54.833 14:28:03 -- pm/common@26 -- $ sleep 1 00:08:54.833 14:28:03 -- pm/common@21 -- $ date +%s 00:08:54.833 14:28:03 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713364083 00:08:54.833 14:28:03 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713364083 00:08:54.833 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713364083_collect-vmstat.pm.log 00:08:54.833 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713364083_collect-cpu-load.pm.log 00:08:55.769 14:28:04 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:08:55.769 14:28:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:08:55.769 14:28:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:08:55.769 14:28:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:08:55.769 14:28:04 -- spdk/autobuild.sh@16 -- $ date -u 00:08:55.769 Wed Apr 17 02:28:04 PM UTC 2024 00:08:55.769 14:28:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:08:55.769 v24.05-pre-392-g0fa934e8f 00:08:55.769 14:28:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:08:55.769 14:28:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:08:55.769 14:28:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:08:55.769 14:28:04 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:08:55.769 14:28:04 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:08:55.769 14:28:04 -- common/autotest_common.sh@10 -- $ set +x 00:08:55.769 ************************************ 00:08:55.769 START TEST ubsan 00:08:55.769 ************************************ 00:08:55.769 using ubsan 00:08:55.769 14:28:04 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 
00:08:55.769 00:08:55.769 real 0m0.000s 00:08:55.769 user 0m0.000s 00:08:55.769 sys 0m0.000s 00:08:55.769 14:28:04 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:08:55.769 ************************************ 00:08:55.769 14:28:04 -- common/autotest_common.sh@10 -- $ set +x 00:08:55.769 END TEST ubsan 00:08:55.769 ************************************ 00:08:56.028 14:28:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:08:56.028 14:28:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:08:56.028 14:28:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:08:56.028 14:28:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:08:56.028 14:28:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:08:56.028 14:28:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:08:56.028 14:28:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:08:56.028 14:28:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:08:56.028 14:28:04 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:08:56.028 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:56.028 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:56.286 Using 'verbs' RDMA provider 00:09:09.450 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:09:24.331 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:09:24.331 Creating mk/config.mk...done. 00:09:24.331 Creating mk/cc.flags.mk...done. 00:09:24.331 Type 'make' to build. 00:09:24.331 14:28:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:09:24.331 14:28:30 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:09:24.331 14:28:30 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:09:24.331 14:28:30 -- common/autotest_common.sh@10 -- $ set +x 00:09:24.331 ************************************ 00:09:24.331 START TEST make 00:09:24.332 ************************************ 00:09:24.332 14:28:30 -- common/autotest_common.sh@1111 -- $ make -j10 00:09:24.332 make[1]: Nothing to be done for 'all'. 
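Past this point the output is the SPDK build proper: configure has already run with the flags shown above and make -j10 is starting, with DPDK and ISA-L built as bundled submodules first. A minimal local reproduction of that step (configure flags copied verbatim from the log; the clone URL and submodule step are assumptions about starting from a fresh checkout):

    git clone https://github.com/spdk/spdk.git && cd spdk
    git submodule update --init
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10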
00:09:34.302 The Meson build system 00:09:34.302 Version: 1.3.1 00:09:34.302 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:09:34.302 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:09:34.302 Build type: native build 00:09:34.302 Program cat found: YES (/usr/bin/cat) 00:09:34.302 Project name: DPDK 00:09:34.302 Project version: 23.11.0 00:09:34.302 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:09:34.302 C linker for the host machine: cc ld.bfd 2.39-16 00:09:34.302 Host machine cpu family: x86_64 00:09:34.302 Host machine cpu: x86_64 00:09:34.302 Message: ## Building in Developer Mode ## 00:09:34.302 Program pkg-config found: YES (/usr/bin/pkg-config) 00:09:34.302 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:09:34.302 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:09:34.302 Program python3 found: YES (/usr/bin/python3) 00:09:34.302 Program cat found: YES (/usr/bin/cat) 00:09:34.302 Compiler for C supports arguments -march=native: YES 00:09:34.302 Checking for size of "void *" : 8 00:09:34.302 Checking for size of "void *" : 8 (cached) 00:09:34.302 Library m found: YES 00:09:34.302 Library numa found: YES 00:09:34.302 Has header "numaif.h" : YES 00:09:34.302 Library fdt found: NO 00:09:34.302 Library execinfo found: NO 00:09:34.302 Has header "execinfo.h" : YES 00:09:34.303 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:09:34.303 Run-time dependency libarchive found: NO (tried pkgconfig) 00:09:34.303 Run-time dependency libbsd found: NO (tried pkgconfig) 00:09:34.303 Run-time dependency jansson found: NO (tried pkgconfig) 00:09:34.303 Run-time dependency openssl found: YES 3.0.9 00:09:34.303 Run-time dependency libpcap found: YES 1.10.4 00:09:34.303 Has header "pcap.h" with dependency libpcap: YES 00:09:34.303 Compiler for C supports arguments -Wcast-qual: YES 00:09:34.303 Compiler for C supports arguments -Wdeprecated: YES 00:09:34.303 Compiler for C supports arguments -Wformat: YES 00:09:34.303 Compiler for C supports arguments -Wformat-nonliteral: NO 00:09:34.303 Compiler for C supports arguments -Wformat-security: NO 00:09:34.303 Compiler for C supports arguments -Wmissing-declarations: YES 00:09:34.303 Compiler for C supports arguments -Wmissing-prototypes: YES 00:09:34.303 Compiler for C supports arguments -Wnested-externs: YES 00:09:34.303 Compiler for C supports arguments -Wold-style-definition: YES 00:09:34.303 Compiler for C supports arguments -Wpointer-arith: YES 00:09:34.303 Compiler for C supports arguments -Wsign-compare: YES 00:09:34.303 Compiler for C supports arguments -Wstrict-prototypes: YES 00:09:34.303 Compiler for C supports arguments -Wundef: YES 00:09:34.303 Compiler for C supports arguments -Wwrite-strings: YES 00:09:34.303 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:09:34.303 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:09:34.303 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:09:34.303 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:09:34.303 Program objdump found: YES (/usr/bin/objdump) 00:09:34.303 Compiler for C supports arguments -mavx512f: YES 00:09:34.303 Checking if "AVX512 checking" compiles: YES 00:09:34.303 Fetching value of define "__SSE4_2__" : 1 00:09:34.303 Fetching value of define "__AES__" : 1 00:09:34.303 Fetching value of define "__AVX__" : 1 00:09:34.303 
Fetching value of define "__AVX2__" : 1 00:09:34.303 Fetching value of define "__AVX512BW__" : (undefined) 00:09:34.303 Fetching value of define "__AVX512CD__" : (undefined) 00:09:34.303 Fetching value of define "__AVX512DQ__" : (undefined) 00:09:34.303 Fetching value of define "__AVX512F__" : (undefined) 00:09:34.303 Fetching value of define "__AVX512VL__" : (undefined) 00:09:34.303 Fetching value of define "__PCLMUL__" : 1 00:09:34.303 Fetching value of define "__RDRND__" : 1 00:09:34.303 Fetching value of define "__RDSEED__" : 1 00:09:34.303 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:09:34.303 Fetching value of define "__znver1__" : (undefined) 00:09:34.303 Fetching value of define "__znver2__" : (undefined) 00:09:34.303 Fetching value of define "__znver3__" : (undefined) 00:09:34.303 Fetching value of define "__znver4__" : (undefined) 00:09:34.303 Compiler for C supports arguments -Wno-format-truncation: YES 00:09:34.303 Message: lib/log: Defining dependency "log" 00:09:34.303 Message: lib/kvargs: Defining dependency "kvargs" 00:09:34.303 Message: lib/telemetry: Defining dependency "telemetry" 00:09:34.303 Checking for function "getentropy" : NO 00:09:34.303 Message: lib/eal: Defining dependency "eal" 00:09:34.303 Message: lib/ring: Defining dependency "ring" 00:09:34.303 Message: lib/rcu: Defining dependency "rcu" 00:09:34.303 Message: lib/mempool: Defining dependency "mempool" 00:09:34.303 Message: lib/mbuf: Defining dependency "mbuf" 00:09:34.303 Fetching value of define "__PCLMUL__" : 1 (cached) 00:09:34.303 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:09:34.303 Compiler for C supports arguments -mpclmul: YES 00:09:34.303 Compiler for C supports arguments -maes: YES 00:09:34.303 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:34.303 Compiler for C supports arguments -mavx512bw: YES 00:09:34.303 Compiler for C supports arguments -mavx512dq: YES 00:09:34.303 Compiler for C supports arguments -mavx512vl: YES 00:09:34.303 Compiler for C supports arguments -mvpclmulqdq: YES 00:09:34.303 Compiler for C supports arguments -mavx2: YES 00:09:34.303 Compiler for C supports arguments -mavx: YES 00:09:34.303 Message: lib/net: Defining dependency "net" 00:09:34.303 Message: lib/meter: Defining dependency "meter" 00:09:34.303 Message: lib/ethdev: Defining dependency "ethdev" 00:09:34.303 Message: lib/pci: Defining dependency "pci" 00:09:34.303 Message: lib/cmdline: Defining dependency "cmdline" 00:09:34.303 Message: lib/hash: Defining dependency "hash" 00:09:34.303 Message: lib/timer: Defining dependency "timer" 00:09:34.303 Message: lib/compressdev: Defining dependency "compressdev" 00:09:34.303 Message: lib/cryptodev: Defining dependency "cryptodev" 00:09:34.303 Message: lib/dmadev: Defining dependency "dmadev" 00:09:34.303 Compiler for C supports arguments -Wno-cast-qual: YES 00:09:34.303 Message: lib/power: Defining dependency "power" 00:09:34.303 Message: lib/reorder: Defining dependency "reorder" 00:09:34.303 Message: lib/security: Defining dependency "security" 00:09:34.303 Has header "linux/userfaultfd.h" : YES 00:09:34.303 Has header "linux/vduse.h" : YES 00:09:34.303 Message: lib/vhost: Defining dependency "vhost" 00:09:34.303 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:09:34.303 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:09:34.303 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:09:34.303 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:09:34.303 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:09:34.303 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:09:34.303 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:09:34.303 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:09:34.303 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:09:34.303 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:09:34.303 Program doxygen found: YES (/usr/bin/doxygen) 00:09:34.303 Configuring doxy-api-html.conf using configuration 00:09:34.303 Configuring doxy-api-man.conf using configuration 00:09:34.303 Program mandb found: YES (/usr/bin/mandb) 00:09:34.303 Program sphinx-build found: NO 00:09:34.303 Configuring rte_build_config.h using configuration 00:09:34.303 Message: 00:09:34.303 ================= 00:09:34.303 Applications Enabled 00:09:34.303 ================= 00:09:34.303 00:09:34.303 apps: 00:09:34.303 00:09:34.303 00:09:34.303 Message: 00:09:34.303 ================= 00:09:34.303 Libraries Enabled 00:09:34.303 ================= 00:09:34.303 00:09:34.303 libs: 00:09:34.303 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:09:34.303 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:09:34.303 cryptodev, dmadev, power, reorder, security, vhost, 00:09:34.303 00:09:34.303 Message: 00:09:34.303 =============== 00:09:34.303 Drivers Enabled 00:09:34.303 =============== 00:09:34.303 00:09:34.303 common: 00:09:34.303 00:09:34.303 bus: 00:09:34.303 pci, vdev, 00:09:34.303 mempool: 00:09:34.303 ring, 00:09:34.303 dma: 00:09:34.303 00:09:34.303 net: 00:09:34.303 00:09:34.303 crypto: 00:09:34.303 00:09:34.303 compress: 00:09:34.303 00:09:34.303 vdpa: 00:09:34.303 00:09:34.303 00:09:34.303 Message: 00:09:34.303 ================= 00:09:34.303 Content Skipped 00:09:34.303 ================= 00:09:34.303 00:09:34.303 apps: 00:09:34.303 dumpcap: explicitly disabled via build config 00:09:34.303 graph: explicitly disabled via build config 00:09:34.303 pdump: explicitly disabled via build config 00:09:34.303 proc-info: explicitly disabled via build config 00:09:34.303 test-acl: explicitly disabled via build config 00:09:34.303 test-bbdev: explicitly disabled via build config 00:09:34.303 test-cmdline: explicitly disabled via build config 00:09:34.303 test-compress-perf: explicitly disabled via build config 00:09:34.303 test-crypto-perf: explicitly disabled via build config 00:09:34.303 test-dma-perf: explicitly disabled via build config 00:09:34.303 test-eventdev: explicitly disabled via build config 00:09:34.303 test-fib: explicitly disabled via build config 00:09:34.303 test-flow-perf: explicitly disabled via build config 00:09:34.303 test-gpudev: explicitly disabled via build config 00:09:34.303 test-mldev: explicitly disabled via build config 00:09:34.303 test-pipeline: explicitly disabled via build config 00:09:34.303 test-pmd: explicitly disabled via build config 00:09:34.303 test-regex: explicitly disabled via build config 00:09:34.303 test-sad: explicitly disabled via build config 00:09:34.303 test-security-perf: explicitly disabled via build config 00:09:34.303 00:09:34.303 libs: 00:09:34.303 metrics: explicitly disabled via build config 00:09:34.303 acl: explicitly disabled via build config 00:09:34.303 bbdev: explicitly disabled via build config 00:09:34.303 bitratestats: explicitly disabled via build config 00:09:34.303 bpf: explicitly disabled via build config 00:09:34.303 cfgfile: explicitly 
disabled via build config 00:09:34.303 distributor: explicitly disabled via build config 00:09:34.303 efd: explicitly disabled via build config 00:09:34.303 eventdev: explicitly disabled via build config 00:09:34.303 dispatcher: explicitly disabled via build config 00:09:34.303 gpudev: explicitly disabled via build config 00:09:34.303 gro: explicitly disabled via build config 00:09:34.303 gso: explicitly disabled via build config 00:09:34.303 ip_frag: explicitly disabled via build config 00:09:34.303 jobstats: explicitly disabled via build config 00:09:34.303 latencystats: explicitly disabled via build config 00:09:34.303 lpm: explicitly disabled via build config 00:09:34.303 member: explicitly disabled via build config 00:09:34.303 pcapng: explicitly disabled via build config 00:09:34.303 rawdev: explicitly disabled via build config 00:09:34.303 regexdev: explicitly disabled via build config 00:09:34.303 mldev: explicitly disabled via build config 00:09:34.303 rib: explicitly disabled via build config 00:09:34.303 sched: explicitly disabled via build config 00:09:34.303 stack: explicitly disabled via build config 00:09:34.303 ipsec: explicitly disabled via build config 00:09:34.303 pdcp: explicitly disabled via build config 00:09:34.303 fib: explicitly disabled via build config 00:09:34.303 port: explicitly disabled via build config 00:09:34.303 pdump: explicitly disabled via build config 00:09:34.303 table: explicitly disabled via build config 00:09:34.304 pipeline: explicitly disabled via build config 00:09:34.304 graph: explicitly disabled via build config 00:09:34.304 node: explicitly disabled via build config 00:09:34.304 00:09:34.304 drivers: 00:09:34.304 common/cpt: not in enabled drivers build config 00:09:34.304 common/dpaax: not in enabled drivers build config 00:09:34.304 common/iavf: not in enabled drivers build config 00:09:34.304 common/idpf: not in enabled drivers build config 00:09:34.304 common/mvep: not in enabled drivers build config 00:09:34.304 common/octeontx: not in enabled drivers build config 00:09:34.304 bus/auxiliary: not in enabled drivers build config 00:09:34.304 bus/cdx: not in enabled drivers build config 00:09:34.304 bus/dpaa: not in enabled drivers build config 00:09:34.304 bus/fslmc: not in enabled drivers build config 00:09:34.304 bus/ifpga: not in enabled drivers build config 00:09:34.304 bus/platform: not in enabled drivers build config 00:09:34.304 bus/vmbus: not in enabled drivers build config 00:09:34.304 common/cnxk: not in enabled drivers build config 00:09:34.304 common/mlx5: not in enabled drivers build config 00:09:34.304 common/nfp: not in enabled drivers build config 00:09:34.304 common/qat: not in enabled drivers build config 00:09:34.304 common/sfc_efx: not in enabled drivers build config 00:09:34.304 mempool/bucket: not in enabled drivers build config 00:09:34.304 mempool/cnxk: not in enabled drivers build config 00:09:34.304 mempool/dpaa: not in enabled drivers build config 00:09:34.304 mempool/dpaa2: not in enabled drivers build config 00:09:34.304 mempool/octeontx: not in enabled drivers build config 00:09:34.304 mempool/stack: not in enabled drivers build config 00:09:34.304 dma/cnxk: not in enabled drivers build config 00:09:34.304 dma/dpaa: not in enabled drivers build config 00:09:34.304 dma/dpaa2: not in enabled drivers build config 00:09:34.304 dma/hisilicon: not in enabled drivers build config 00:09:34.304 dma/idxd: not in enabled drivers build config 00:09:34.304 dma/ioat: not in enabled drivers build config 00:09:34.304 
dma/skeleton: not in enabled drivers build config 00:09:34.304 net/af_packet: not in enabled drivers build config 00:09:34.304 net/af_xdp: not in enabled drivers build config 00:09:34.304 net/ark: not in enabled drivers build config 00:09:34.304 net/atlantic: not in enabled drivers build config 00:09:34.304 net/avp: not in enabled drivers build config 00:09:34.304 net/axgbe: not in enabled drivers build config 00:09:34.304 net/bnx2x: not in enabled drivers build config 00:09:34.304 net/bnxt: not in enabled drivers build config 00:09:34.304 net/bonding: not in enabled drivers build config 00:09:34.304 net/cnxk: not in enabled drivers build config 00:09:34.304 net/cpfl: not in enabled drivers build config 00:09:34.304 net/cxgbe: not in enabled drivers build config 00:09:34.304 net/dpaa: not in enabled drivers build config 00:09:34.304 net/dpaa2: not in enabled drivers build config 00:09:34.304 net/e1000: not in enabled drivers build config 00:09:34.304 net/ena: not in enabled drivers build config 00:09:34.304 net/enetc: not in enabled drivers build config 00:09:34.304 net/enetfec: not in enabled drivers build config 00:09:34.304 net/enic: not in enabled drivers build config 00:09:34.304 net/failsafe: not in enabled drivers build config 00:09:34.304 net/fm10k: not in enabled drivers build config 00:09:34.304 net/gve: not in enabled drivers build config 00:09:34.304 net/hinic: not in enabled drivers build config 00:09:34.304 net/hns3: not in enabled drivers build config 00:09:34.304 net/i40e: not in enabled drivers build config 00:09:34.304 net/iavf: not in enabled drivers build config 00:09:34.304 net/ice: not in enabled drivers build config 00:09:34.304 net/idpf: not in enabled drivers build config 00:09:34.304 net/igc: not in enabled drivers build config 00:09:34.304 net/ionic: not in enabled drivers build config 00:09:34.304 net/ipn3ke: not in enabled drivers build config 00:09:34.304 net/ixgbe: not in enabled drivers build config 00:09:34.304 net/mana: not in enabled drivers build config 00:09:34.304 net/memif: not in enabled drivers build config 00:09:34.304 net/mlx4: not in enabled drivers build config 00:09:34.304 net/mlx5: not in enabled drivers build config 00:09:34.304 net/mvneta: not in enabled drivers build config 00:09:34.304 net/mvpp2: not in enabled drivers build config 00:09:34.304 net/netvsc: not in enabled drivers build config 00:09:34.304 net/nfb: not in enabled drivers build config 00:09:34.304 net/nfp: not in enabled drivers build config 00:09:34.304 net/ngbe: not in enabled drivers build config 00:09:34.304 net/null: not in enabled drivers build config 00:09:34.304 net/octeontx: not in enabled drivers build config 00:09:34.304 net/octeon_ep: not in enabled drivers build config 00:09:34.304 net/pcap: not in enabled drivers build config 00:09:34.304 net/pfe: not in enabled drivers build config 00:09:34.304 net/qede: not in enabled drivers build config 00:09:34.304 net/ring: not in enabled drivers build config 00:09:34.304 net/sfc: not in enabled drivers build config 00:09:34.304 net/softnic: not in enabled drivers build config 00:09:34.304 net/tap: not in enabled drivers build config 00:09:34.304 net/thunderx: not in enabled drivers build config 00:09:34.304 net/txgbe: not in enabled drivers build config 00:09:34.304 net/vdev_netvsc: not in enabled drivers build config 00:09:34.304 net/vhost: not in enabled drivers build config 00:09:34.304 net/virtio: not in enabled drivers build config 00:09:34.304 net/vmxnet3: not in enabled drivers build config 00:09:34.304 raw/*: 
missing internal dependency, "rawdev" 00:09:34.304 crypto/armv8: not in enabled drivers build config 00:09:34.304 crypto/bcmfs: not in enabled drivers build config 00:09:34.304 crypto/caam_jr: not in enabled drivers build config 00:09:34.304 crypto/ccp: not in enabled drivers build config 00:09:34.304 crypto/cnxk: not in enabled drivers build config 00:09:34.304 crypto/dpaa_sec: not in enabled drivers build config 00:09:34.304 crypto/dpaa2_sec: not in enabled drivers build config 00:09:34.304 crypto/ipsec_mb: not in enabled drivers build config 00:09:34.304 crypto/mlx5: not in enabled drivers build config 00:09:34.304 crypto/mvsam: not in enabled drivers build config 00:09:34.304 crypto/nitrox: not in enabled drivers build config 00:09:34.304 crypto/null: not in enabled drivers build config 00:09:34.304 crypto/octeontx: not in enabled drivers build config 00:09:34.304 crypto/openssl: not in enabled drivers build config 00:09:34.304 crypto/scheduler: not in enabled drivers build config 00:09:34.304 crypto/uadk: not in enabled drivers build config 00:09:34.304 crypto/virtio: not in enabled drivers build config 00:09:34.304 compress/isal: not in enabled drivers build config 00:09:34.304 compress/mlx5: not in enabled drivers build config 00:09:34.304 compress/octeontx: not in enabled drivers build config 00:09:34.304 compress/zlib: not in enabled drivers build config 00:09:34.304 regex/*: missing internal dependency, "regexdev" 00:09:34.304 ml/*: missing internal dependency, "mldev" 00:09:34.304 vdpa/ifc: not in enabled drivers build config 00:09:34.304 vdpa/mlx5: not in enabled drivers build config 00:09:34.304 vdpa/nfp: not in enabled drivers build config 00:09:34.304 vdpa/sfc: not in enabled drivers build config 00:09:34.304 event/*: missing internal dependency, "eventdev" 00:09:34.304 baseband/*: missing internal dependency, "bbdev" 00:09:34.304 gpu/*: missing internal dependency, "gpudev" 00:09:34.304 00:09:34.304 00:09:34.304 Build targets in project: 85 00:09:34.304 00:09:34.304 DPDK 23.11.0 00:09:34.304 00:09:34.304 User defined options 00:09:34.304 buildtype : debug 00:09:34.304 default_library : shared 00:09:34.304 libdir : lib 00:09:34.304 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:34.304 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:09:34.304 c_link_args : 00:09:34.304 cpu_instruction_set: native 00:09:34.304 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:09:34.304 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:09:34.304 enable_docs : false 00:09:34.304 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:09:34.304 enable_kmods : false 00:09:34.304 tests : false 00:09:34.304 00:09:34.304 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:34.871 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:09:34.871 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:09:34.872 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:09:34.872 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:09:34.872 [4/265] 
Linking static target lib/librte_kvargs.a 00:09:34.872 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:09:34.872 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:09:34.872 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:09:34.872 [8/265] Linking static target lib/librte_log.a 00:09:34.872 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:09:35.130 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:09:35.390 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:09:35.648 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:09:35.956 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:09:35.956 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:09:35.956 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:09:35.956 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:09:35.956 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:09:35.956 [18/265] Linking static target lib/librte_telemetry.a 00:09:35.956 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:09:35.956 [20/265] Linking target lib/librte_log.so.24.0 00:09:35.956 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:09:35.956 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:09:36.214 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:09:36.214 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:09:36.214 [25/265] Linking target lib/librte_kvargs.so.24.0 00:09:36.472 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:09:36.472 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:09:36.472 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:09:36.730 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:09:36.730 [30/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:09:36.730 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:09:36.987 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:09:36.987 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:09:36.987 [34/265] Linking target lib/librte_telemetry.so.24.0 00:09:36.987 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:09:36.987 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:09:37.245 [37/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:09:37.245 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:09:37.245 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:09:37.245 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:09:37.503 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:09:37.503 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:09:37.503 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:09:37.503 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:09:37.761 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:09:37.761 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:09:37.761 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:09:38.020 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:09:38.020 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:09:38.278 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:09:38.278 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:09:38.278 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:09:38.537 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:09:38.537 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:09:38.537 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:09:38.537 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:09:38.537 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:09:38.796 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:09:38.796 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:09:38.796 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:09:38.796 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:09:39.055 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:09:39.313 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:09:39.313 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:09:39.313 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:09:39.313 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:09:39.313 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:09:39.571 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:09:39.829 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:09:39.829 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:09:39.829 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:09:39.829 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:09:39.829 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:09:39.829 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:09:39.829 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:09:39.829 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:09:40.147 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:09:40.147 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:09:40.409 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:09:40.409 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:09:40.667 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:09:40.667 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:09:40.927 [83/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:09:40.927 [84/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:09:40.927 [85/265] Linking static target lib/librte_rcu.a 00:09:40.927 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:09:40.927 [87/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:09:40.927 [88/265] Linking static target lib/librte_ring.a 00:09:40.927 [89/265] Linking static target lib/librte_eal.a 00:09:41.185 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:09:41.185 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:09:41.443 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:09:41.443 [93/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:09:41.443 [94/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:09:41.443 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:09:41.443 [96/265] Linking static target lib/librte_mempool.a 00:09:42.009 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:09:42.009 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:09:42.009 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:09:42.009 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:09:42.009 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:09:42.009 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:09:42.009 [103/265] Linking static target lib/librte_mbuf.a 00:09:42.267 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:09:42.526 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:09:42.785 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:09:42.785 [107/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:09:42.785 [108/265] Linking static target lib/librte_meter.a 00:09:42.785 [109/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:09:42.785 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:09:42.785 [111/265] Linking static target lib/librte_net.a 00:09:43.043 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:09:43.043 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:09:43.301 [114/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:09:43.301 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:09:43.301 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:09:43.301 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:09:43.559 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:09:43.840 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:09:43.840 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:09:44.112 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:09:44.112 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:09:44.370 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:09:44.370 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:09:44.370 
[125/265] Linking static target lib/librte_pci.a 00:09:44.629 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:09:44.629 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:09:44.629 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:09:44.629 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:09:44.629 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:09:44.629 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:09:44.629 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:09:44.629 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:09:44.888 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:09:44.888 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:09:44.888 [136/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:44.888 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:09:44.888 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:09:44.888 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:09:44.888 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:09:44.888 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:09:44.888 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:09:44.888 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:09:44.888 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:09:45.145 [145/265] Linking static target lib/librte_ethdev.a 00:09:45.145 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:09:45.403 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:09:45.403 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:09:45.662 [149/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:09:45.662 [150/265] Linking static target lib/librte_cmdline.a 00:09:45.662 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:09:45.920 [152/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:09:45.920 [153/265] Linking static target lib/librte_timer.a 00:09:45.920 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:09:45.920 [155/265] Linking static target lib/librte_compressdev.a 00:09:45.920 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:09:46.178 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:09:46.178 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:09:46.178 [159/265] Linking static target lib/librte_hash.a 00:09:46.178 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:09:46.437 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:09:46.437 [162/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:09:46.695 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:09:46.955 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:09:46.955 [165/265] Linking static 
target lib/librte_dmadev.a 00:09:46.955 [166/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:46.955 [167/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:09:46.955 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:09:46.955 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:09:46.955 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:09:47.213 [171/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:47.213 [172/265] Linking static target lib/librte_cryptodev.a 00:09:47.213 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:09:47.213 [174/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:47.471 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:47.731 [176/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:09:47.731 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:09:47.731 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:09:47.731 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:09:47.731 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:09:47.731 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:09:48.299 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:09:48.299 [183/265] Linking static target lib/librte_power.a 00:09:48.299 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:48.299 [185/265] Linking static target lib/librte_reorder.a 00:09:48.558 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:48.558 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:48.558 [188/265] Linking static target lib/librte_security.a 00:09:48.558 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:48.558 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:48.816 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:48.816 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:49.074 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:49.333 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:49.333 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:49.592 [196/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:49.592 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:09:49.592 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:09:49.592 [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:49.850 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:09:49.850 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:09:49.850 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:09:50.108 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:09:50.108 [204/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:09:50.108 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:09:50.108 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:09:50.367 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:09:50.367 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:09:50.367 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:09:50.367 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:09:50.367 [211/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:50.367 [212/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:50.367 [213/265] Linking static target drivers/librte_bus_pci.a 00:09:50.625 [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:09:50.625 [215/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:09:50.625 [216/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:09:50.625 [217/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:50.625 [218/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:50.625 [219/265] Linking static target drivers/librte_bus_vdev.a 00:09:50.625 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:09:50.625 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:50.625 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:50.884 [223/265] Linking static target drivers/librte_mempool_ring.a 00:09:50.884 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:50.884 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:51.451 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:51.451 [227/265] Linking static target lib/librte_vhost.a 00:09:52.464 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:52.464 [229/265] Linking target lib/librte_eal.so.24.0 00:09:52.464 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:09:52.723 [231/265] Linking target lib/librte_pci.so.24.0 00:09:52.723 [232/265] Linking target lib/librte_timer.so.24.0 00:09:52.723 [233/265] Linking target lib/librte_dmadev.so.24.0 00:09:52.723 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:09:52.723 [235/265] Linking target lib/librte_meter.so.24.0 00:09:52.723 [236/265] Linking target lib/librte_ring.so.24.0 00:09:52.723 [237/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:52.723 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:09:52.723 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:09:52.723 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:09:52.723 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:09:52.723 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:09:52.723 [243/265] Linking target 
lib/librte_mempool.so.24.0 00:09:52.723 [244/265] Linking target lib/librte_rcu.so.24.0 00:09:52.723 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:09:52.982 [246/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:52.982 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:09:52.982 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:09:52.982 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:09:52.982 [250/265] Linking target lib/librte_mbuf.so.24.0 00:09:53.240 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:09:53.240 [252/265] Linking target lib/librte_net.so.24.0 00:09:53.240 [253/265] Linking target lib/librte_reorder.so.24.0 00:09:53.240 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:09:53.240 [255/265] Linking target lib/librte_compressdev.so.24.0 00:09:53.498 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:09:53.498 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:09:53.498 [258/265] Linking target lib/librte_hash.so.24.0 00:09:53.498 [259/265] Linking target lib/librte_cmdline.so.24.0 00:09:53.498 [260/265] Linking target lib/librte_security.so.24.0 00:09:53.498 [261/265] Linking target lib/librte_ethdev.so.24.0 00:09:53.498 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:09:53.756 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:09:53.756 [264/265] Linking target lib/librte_power.so.24.0 00:09:53.756 [265/265] Linking target lib/librte_vhost.so.24.0 00:09:53.756 INFO: autodetecting backend as ninja 00:09:53.756 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:55.132 CC lib/log/log.o 00:09:55.132 CC lib/log/log_flags.o 00:09:55.132 CC lib/log/log_deprecated.o 00:09:55.132 CC lib/ut_mock/mock.o 00:09:55.132 CC lib/ut/ut.o 00:09:55.132 LIB libspdk_ut_mock.a 00:09:55.132 SO libspdk_ut_mock.so.6.0 00:09:55.132 LIB libspdk_ut.a 00:09:55.132 LIB libspdk_log.a 00:09:55.132 SO libspdk_ut.so.2.0 00:09:55.133 SYMLINK libspdk_ut_mock.so 00:09:55.133 SO libspdk_log.so.7.0 00:09:55.133 SYMLINK libspdk_ut.so 00:09:55.133 SYMLINK libspdk_log.so 00:09:55.392 CC lib/dma/dma.o 00:09:55.392 CXX lib/trace_parser/trace.o 00:09:55.392 CC lib/ioat/ioat.o 00:09:55.392 CC lib/util/base64.o 00:09:55.392 CC lib/util/bit_array.o 00:09:55.392 CC lib/util/cpuset.o 00:09:55.392 CC lib/util/crc16.o 00:09:55.392 CC lib/util/crc32.o 00:09:55.392 CC lib/util/crc32c.o 00:09:55.651 CC lib/vfio_user/host/vfio_user_pci.o 00:09:55.651 CC lib/vfio_user/host/vfio_user.o 00:09:55.651 CC lib/util/crc32_ieee.o 00:09:55.651 CC lib/util/crc64.o 00:09:55.651 CC lib/util/dif.o 00:09:55.651 LIB libspdk_dma.a 00:09:55.651 CC lib/util/fd.o 00:09:55.651 SO libspdk_dma.so.4.0 00:09:55.651 LIB libspdk_ioat.a 00:09:55.651 CC lib/util/file.o 00:09:55.651 SO libspdk_ioat.so.7.0 00:09:55.651 SYMLINK libspdk_dma.so 00:09:55.651 CC lib/util/hexlify.o 00:09:55.651 CC lib/util/iov.o 00:09:55.651 CC lib/util/math.o 00:09:55.651 SYMLINK libspdk_ioat.so 00:09:55.908 CC lib/util/pipe.o 00:09:55.908 CC lib/util/strerror_tls.o 00:09:55.908 CC lib/util/string.o 00:09:55.908 CC lib/util/uuid.o 00:09:55.908 LIB libspdk_vfio_user.a 00:09:55.908 CC lib/util/fd_group.o 00:09:55.908 SO 
libspdk_vfio_user.so.5.0 00:09:55.908 CC lib/util/xor.o 00:09:55.908 CC lib/util/zipf.o 00:09:55.908 SYMLINK libspdk_vfio_user.so 00:09:56.165 LIB libspdk_util.a 00:09:56.165 SO libspdk_util.so.9.0 00:09:56.424 SYMLINK libspdk_util.so 00:09:56.424 LIB libspdk_trace_parser.a 00:09:56.424 SO libspdk_trace_parser.so.5.0 00:09:56.687 SYMLINK libspdk_trace_parser.so 00:09:56.687 CC lib/conf/conf.o 00:09:56.687 CC lib/vmd/vmd.o 00:09:56.687 CC lib/vmd/led.o 00:09:56.687 CC lib/idxd/idxd.o 00:09:56.687 CC lib/rdma/common.o 00:09:56.687 CC lib/rdma/rdma_verbs.o 00:09:56.687 CC lib/idxd/idxd_user.o 00:09:56.687 CC lib/json/json_parse.o 00:09:56.687 CC lib/env_dpdk/env.o 00:09:56.687 CC lib/json/json_util.o 00:09:56.687 CC lib/json/json_write.o 00:09:56.955 LIB libspdk_conf.a 00:09:56.955 CC lib/env_dpdk/memory.o 00:09:56.955 SO libspdk_conf.so.6.0 00:09:56.955 CC lib/env_dpdk/pci.o 00:09:56.955 SYMLINK libspdk_conf.so 00:09:56.955 CC lib/env_dpdk/init.o 00:09:56.955 CC lib/env_dpdk/threads.o 00:09:56.955 LIB libspdk_rdma.a 00:09:56.955 CC lib/env_dpdk/pci_ioat.o 00:09:56.955 SO libspdk_rdma.so.6.0 00:09:56.955 LIB libspdk_json.a 00:09:56.955 SYMLINK libspdk_rdma.so 00:09:56.955 CC lib/env_dpdk/pci_virtio.o 00:09:56.955 CC lib/env_dpdk/pci_vmd.o 00:09:57.221 SO libspdk_json.so.6.0 00:09:57.221 CC lib/env_dpdk/pci_idxd.o 00:09:57.221 LIB libspdk_idxd.a 00:09:57.221 SYMLINK libspdk_json.so 00:09:57.221 SO libspdk_idxd.so.12.0 00:09:57.221 CC lib/env_dpdk/pci_event.o 00:09:57.221 CC lib/env_dpdk/sigbus_handler.o 00:09:57.221 SYMLINK libspdk_idxd.so 00:09:57.221 CC lib/env_dpdk/pci_dpdk.o 00:09:57.221 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:57.221 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:57.221 LIB libspdk_vmd.a 00:09:57.221 SO libspdk_vmd.so.6.0 00:09:57.490 CC lib/jsonrpc/jsonrpc_server.o 00:09:57.490 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:57.490 CC lib/jsonrpc/jsonrpc_client.o 00:09:57.490 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:57.490 SYMLINK libspdk_vmd.so 00:09:57.760 LIB libspdk_jsonrpc.a 00:09:57.760 SO libspdk_jsonrpc.so.6.0 00:09:57.760 SYMLINK libspdk_jsonrpc.so 00:09:58.032 CC lib/rpc/rpc.o 00:09:58.032 LIB libspdk_env_dpdk.a 00:09:58.032 SO libspdk_env_dpdk.so.14.0 00:09:58.307 LIB libspdk_rpc.a 00:09:58.307 SO libspdk_rpc.so.6.0 00:09:58.307 SYMLINK libspdk_env_dpdk.so 00:09:58.570 SYMLINK libspdk_rpc.so 00:09:58.570 CC lib/keyring/keyring.o 00:09:58.570 CC lib/keyring/keyring_rpc.o 00:09:58.570 CC lib/trace/trace.o 00:09:58.570 CC lib/trace/trace_flags.o 00:09:58.570 CC lib/trace/trace_rpc.o 00:09:58.570 CC lib/notify/notify.o 00:09:58.570 CC lib/notify/notify_rpc.o 00:09:58.828 LIB libspdk_notify.a 00:09:58.828 LIB libspdk_trace.a 00:09:58.828 SO libspdk_notify.so.6.0 00:09:58.828 SO libspdk_trace.so.10.0 00:09:59.087 SYMLINK libspdk_notify.so 00:09:59.087 LIB libspdk_keyring.a 00:09:59.087 SO libspdk_keyring.so.1.0 00:09:59.087 SYMLINK libspdk_trace.so 00:09:59.087 SYMLINK libspdk_keyring.so 00:09:59.360 CC lib/sock/sock.o 00:09:59.360 CC lib/sock/sock_rpc.o 00:09:59.360 CC lib/thread/thread.o 00:09:59.360 CC lib/thread/iobuf.o 00:09:59.963 LIB libspdk_sock.a 00:09:59.963 SO libspdk_sock.so.9.0 00:09:59.963 SYMLINK libspdk_sock.so 00:10:00.222 CC lib/nvme/nvme_ctrlr_cmd.o 00:10:00.222 CC lib/nvme/nvme_ctrlr.o 00:10:00.222 CC lib/nvme/nvme_fabric.o 00:10:00.222 CC lib/nvme/nvme_ns_cmd.o 00:10:00.222 CC lib/nvme/nvme_ns.o 00:10:00.222 CC lib/nvme/nvme_pcie.o 00:10:00.222 CC lib/nvme/nvme_qpair.o 00:10:00.222 CC lib/nvme/nvme_pcie_common.o 00:10:00.222 CC lib/nvme/nvme.o 00:10:00.789 LIB 
libspdk_thread.a 00:10:00.789 SO libspdk_thread.so.10.0 00:10:01.047 SYMLINK libspdk_thread.so 00:10:01.047 CC lib/nvme/nvme_quirks.o 00:10:01.047 CC lib/nvme/nvme_transport.o 00:10:01.047 CC lib/nvme/nvme_discovery.o 00:10:01.047 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:01.307 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:01.307 CC lib/nvme/nvme_tcp.o 00:10:01.307 CC lib/accel/accel.o 00:10:01.307 CC lib/accel/accel_rpc.o 00:10:01.566 CC lib/accel/accel_sw.o 00:10:01.566 CC lib/nvme/nvme_opal.o 00:10:01.566 CC lib/nvme/nvme_io_msg.o 00:10:01.824 CC lib/nvme/nvme_poll_group.o 00:10:01.824 CC lib/nvme/nvme_zns.o 00:10:01.824 CC lib/nvme/nvme_stubs.o 00:10:02.082 CC lib/blob/blobstore.o 00:10:02.082 CC lib/nvme/nvme_auth.o 00:10:02.082 CC lib/init/json_config.o 00:10:02.341 LIB libspdk_accel.a 00:10:02.341 CC lib/init/subsystem.o 00:10:02.341 CC lib/blob/request.o 00:10:02.341 CC lib/blob/zeroes.o 00:10:02.341 SO libspdk_accel.so.15.0 00:10:02.599 CC lib/nvme/nvme_cuse.o 00:10:02.599 SYMLINK libspdk_accel.so 00:10:02.599 CC lib/init/subsystem_rpc.o 00:10:02.599 CC lib/virtio/virtio.o 00:10:02.599 CC lib/virtio/virtio_vhost_user.o 00:10:02.599 CC lib/nvme/nvme_rdma.o 00:10:02.599 CC lib/init/rpc.o 00:10:02.907 CC lib/blob/blob_bs_dev.o 00:10:02.907 CC lib/bdev/bdev.o 00:10:02.907 CC lib/bdev/bdev_rpc.o 00:10:02.907 LIB libspdk_init.a 00:10:02.907 CC lib/virtio/virtio_vfio_user.o 00:10:02.907 SO libspdk_init.so.5.0 00:10:02.907 CC lib/virtio/virtio_pci.o 00:10:02.907 CC lib/bdev/bdev_zone.o 00:10:02.907 SYMLINK libspdk_init.so 00:10:02.907 CC lib/bdev/part.o 00:10:03.166 CC lib/bdev/scsi_nvme.o 00:10:03.166 LIB libspdk_virtio.a 00:10:03.166 SO libspdk_virtio.so.7.0 00:10:03.166 CC lib/event/app.o 00:10:03.166 CC lib/event/reactor.o 00:10:03.166 CC lib/event/log_rpc.o 00:10:03.166 CC lib/event/app_rpc.o 00:10:03.166 CC lib/event/scheduler_static.o 00:10:03.425 SYMLINK libspdk_virtio.so 00:10:03.683 LIB libspdk_event.a 00:10:03.943 SO libspdk_event.so.13.0 00:10:03.943 SYMLINK libspdk_event.so 00:10:03.943 LIB libspdk_nvme.a 00:10:04.201 SO libspdk_nvme.so.13.0 00:10:04.460 SYMLINK libspdk_nvme.so 00:10:05.027 LIB libspdk_blob.a 00:10:05.027 SO libspdk_blob.so.11.0 00:10:05.286 SYMLINK libspdk_blob.so 00:10:05.545 CC lib/blobfs/blobfs.o 00:10:05.545 CC lib/lvol/lvol.o 00:10:05.545 CC lib/blobfs/tree.o 00:10:05.545 LIB libspdk_bdev.a 00:10:05.545 SO libspdk_bdev.so.15.0 00:10:05.806 SYMLINK libspdk_bdev.so 00:10:06.065 CC lib/ftl/ftl_core.o 00:10:06.065 CC lib/ftl/ftl_init.o 00:10:06.065 CC lib/nvmf/ctrlr.o 00:10:06.065 CC lib/nvmf/ctrlr_discovery.o 00:10:06.065 CC lib/ftl/ftl_layout.o 00:10:06.065 CC lib/scsi/dev.o 00:10:06.065 CC lib/ublk/ublk.o 00:10:06.065 CC lib/nbd/nbd.o 00:10:06.325 CC lib/nbd/nbd_rpc.o 00:10:06.325 CC lib/scsi/lun.o 00:10:06.325 CC lib/nvmf/ctrlr_bdev.o 00:10:06.325 LIB libspdk_blobfs.a 00:10:06.325 CC lib/nvmf/subsystem.o 00:10:06.325 LIB libspdk_nbd.a 00:10:06.325 SO libspdk_blobfs.so.10.0 00:10:06.325 CC lib/ftl/ftl_debug.o 00:10:06.325 SO libspdk_nbd.so.7.0 00:10:06.593 SYMLINK libspdk_blobfs.so 00:10:06.593 SYMLINK libspdk_nbd.so 00:10:06.593 CC lib/ublk/ublk_rpc.o 00:10:06.593 CC lib/ftl/ftl_io.o 00:10:06.593 CC lib/nvmf/nvmf.o 00:10:06.593 CC lib/scsi/port.o 00:10:06.593 LIB libspdk_lvol.a 00:10:06.593 CC lib/nvmf/nvmf_rpc.o 00:10:06.593 SO libspdk_lvol.so.10.0 00:10:06.865 CC lib/ftl/ftl_sb.o 00:10:06.865 SYMLINK libspdk_lvol.so 00:10:06.865 CC lib/ftl/ftl_l2p.o 00:10:06.865 LIB libspdk_ublk.a 00:10:06.865 SO libspdk_ublk.so.3.0 00:10:06.865 CC lib/scsi/scsi.o 
00:10:06.865 CC lib/ftl/ftl_l2p_flat.o 00:10:06.865 SYMLINK libspdk_ublk.so 00:10:06.865 CC lib/ftl/ftl_nv_cache.o 00:10:06.865 CC lib/nvmf/transport.o 00:10:06.865 CC lib/nvmf/tcp.o 00:10:07.122 CC lib/scsi/scsi_bdev.o 00:10:07.122 CC lib/ftl/ftl_band.o 00:10:07.122 CC lib/nvmf/rdma.o 00:10:07.380 CC lib/ftl/ftl_band_ops.o 00:10:07.380 CC lib/ftl/ftl_writer.o 00:10:07.639 CC lib/ftl/ftl_rq.o 00:10:07.639 CC lib/scsi/scsi_pr.o 00:10:07.639 CC lib/scsi/scsi_rpc.o 00:10:07.639 CC lib/ftl/ftl_reloc.o 00:10:07.639 CC lib/ftl/ftl_l2p_cache.o 00:10:07.898 CC lib/scsi/task.o 00:10:07.898 CC lib/ftl/ftl_p2l.o 00:10:07.898 CC lib/ftl/mngt/ftl_mngt.o 00:10:07.898 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:07.898 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:07.898 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:07.898 LIB libspdk_scsi.a 00:10:08.158 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:08.158 SO libspdk_scsi.so.9.0 00:10:08.158 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:08.158 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:08.158 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:08.158 SYMLINK libspdk_scsi.so 00:10:08.158 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:08.158 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:08.158 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:08.418 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:08.418 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:10:08.677 CC lib/ftl/utils/ftl_conf.o 00:10:08.677 CC lib/ftl/utils/ftl_md.o 00:10:08.677 CC lib/ftl/utils/ftl_mempool.o 00:10:08.677 CC lib/ftl/utils/ftl_bitmap.o 00:10:08.677 CC lib/iscsi/conn.o 00:10:08.677 CC lib/vhost/vhost.o 00:10:08.936 CC lib/iscsi/init_grp.o 00:10:08.936 CC lib/vhost/vhost_rpc.o 00:10:08.936 CC lib/ftl/utils/ftl_property.o 00:10:08.936 CC lib/iscsi/iscsi.o 00:10:08.936 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:08.936 CC lib/vhost/vhost_scsi.o 00:10:09.194 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:09.194 CC lib/iscsi/md5.o 00:10:09.194 CC lib/iscsi/param.o 00:10:09.194 CC lib/iscsi/portal_grp.o 00:10:09.453 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:09.453 CC lib/iscsi/tgt_node.o 00:10:09.453 CC lib/vhost/vhost_blk.o 00:10:09.453 CC lib/vhost/rte_vhost_user.o 00:10:09.453 CC lib/iscsi/iscsi_subsystem.o 00:10:09.711 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:10:09.711 CC lib/iscsi/iscsi_rpc.o 00:10:09.711 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:09.969 CC lib/iscsi/task.o 00:10:09.969 LIB libspdk_nvmf.a 00:10:09.969 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:09.969 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:09.969 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:09.969 SO libspdk_nvmf.so.18.0 00:10:10.227 CC lib/ftl/nvc/ftl_nvc_dev.o 00:10:10.227 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:10.227 CC lib/ftl/base/ftl_base_dev.o 00:10:10.227 CC lib/ftl/base/ftl_base_bdev.o 00:10:10.227 SYMLINK libspdk_nvmf.so 00:10:10.227 CC lib/ftl/ftl_trace.o 00:10:10.485 LIB libspdk_iscsi.a 00:10:10.485 SO libspdk_iscsi.so.8.0 00:10:10.751 LIB libspdk_ftl.a 00:10:10.751 LIB libspdk_vhost.a 00:10:10.751 SYMLINK libspdk_iscsi.so 00:10:10.751 SO libspdk_ftl.so.9.0 00:10:10.751 SO libspdk_vhost.so.8.0 00:10:11.009 SYMLINK libspdk_vhost.so 00:10:11.267 SYMLINK libspdk_ftl.so 00:10:11.525 CC module/env_dpdk/env_dpdk_rpc.o 00:10:11.783 CC module/accel/dsa/accel_dsa.o 00:10:11.783 CC module/sock/posix/posix.o 00:10:11.783 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:11.783 CC module/blob/bdev/blob_bdev.o 00:10:11.783 CC module/accel/error/accel_error.o 00:10:11.783 CC module/accel/ioat/accel_ioat.o 00:10:11.783 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:11.783 CC module/accel/iaa/accel_iaa.o 
00:10:11.783 CC module/keyring/file/keyring.o 00:10:11.783 LIB libspdk_env_dpdk_rpc.a 00:10:11.783 SO libspdk_env_dpdk_rpc.so.6.0 00:10:11.783 SYMLINK libspdk_env_dpdk_rpc.so 00:10:11.783 CC module/accel/iaa/accel_iaa_rpc.o 00:10:11.783 CC module/keyring/file/keyring_rpc.o 00:10:11.783 LIB libspdk_scheduler_dpdk_governor.a 00:10:12.041 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:12.041 CC module/accel/ioat/accel_ioat_rpc.o 00:10:12.041 CC module/accel/error/accel_error_rpc.o 00:10:12.041 LIB libspdk_scheduler_dynamic.a 00:10:12.041 SO libspdk_scheduler_dynamic.so.4.0 00:10:12.041 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:12.041 LIB libspdk_accel_iaa.a 00:10:12.041 LIB libspdk_blob_bdev.a 00:10:12.041 SYMLINK libspdk_scheduler_dynamic.so 00:10:12.041 SO libspdk_accel_iaa.so.3.0 00:10:12.041 LIB libspdk_accel_ioat.a 00:10:12.041 LIB libspdk_keyring_file.a 00:10:12.041 SO libspdk_blob_bdev.so.11.0 00:10:12.041 CC module/accel/dsa/accel_dsa_rpc.o 00:10:12.041 LIB libspdk_accel_error.a 00:10:12.041 SO libspdk_keyring_file.so.1.0 00:10:12.041 SO libspdk_accel_ioat.so.6.0 00:10:12.041 SYMLINK libspdk_accel_iaa.so 00:10:12.041 SO libspdk_accel_error.so.2.0 00:10:12.041 SYMLINK libspdk_blob_bdev.so 00:10:12.299 CC module/sock/uring/uring.o 00:10:12.299 SYMLINK libspdk_keyring_file.so 00:10:12.299 SYMLINK libspdk_accel_ioat.so 00:10:12.299 LIB libspdk_accel_dsa.a 00:10:12.299 SYMLINK libspdk_accel_error.so 00:10:12.299 CC module/scheduler/gscheduler/gscheduler.o 00:10:12.299 SO libspdk_accel_dsa.so.5.0 00:10:12.299 SYMLINK libspdk_accel_dsa.so 00:10:12.557 CC module/bdev/gpt/gpt.o 00:10:12.557 LIB libspdk_scheduler_gscheduler.a 00:10:12.557 CC module/bdev/malloc/bdev_malloc.o 00:10:12.557 CC module/bdev/lvol/vbdev_lvol.o 00:10:12.557 CC module/bdev/delay/vbdev_delay.o 00:10:12.557 CC module/blobfs/bdev/blobfs_bdev.o 00:10:12.557 CC module/bdev/error/vbdev_error.o 00:10:12.557 SO libspdk_scheduler_gscheduler.so.4.0 00:10:12.557 CC module/bdev/null/bdev_null.o 00:10:12.557 LIB libspdk_sock_posix.a 00:10:12.557 SO libspdk_sock_posix.so.6.0 00:10:12.557 SYMLINK libspdk_scheduler_gscheduler.so 00:10:12.557 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:12.816 SYMLINK libspdk_sock_posix.so 00:10:12.816 CC module/bdev/gpt/vbdev_gpt.o 00:10:12.816 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:12.816 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:13.075 CC module/bdev/error/vbdev_error_rpc.o 00:10:13.075 CC module/bdev/null/bdev_null_rpc.o 00:10:13.075 CC module/bdev/nvme/bdev_nvme.o 00:10:13.075 LIB libspdk_sock_uring.a 00:10:13.075 LIB libspdk_blobfs_bdev.a 00:10:13.075 SO libspdk_sock_uring.so.5.0 00:10:13.075 LIB libspdk_bdev_delay.a 00:10:13.075 SO libspdk_blobfs_bdev.so.6.0 00:10:13.075 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:13.333 LIB libspdk_bdev_error.a 00:10:13.333 SYMLINK libspdk_sock_uring.so 00:10:13.333 SO libspdk_bdev_delay.so.6.0 00:10:13.333 LIB libspdk_bdev_null.a 00:10:13.333 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:13.333 SO libspdk_bdev_error.so.6.0 00:10:13.333 SYMLINK libspdk_blobfs_bdev.so 00:10:13.333 SO libspdk_bdev_null.so.6.0 00:10:13.333 SYMLINK libspdk_bdev_delay.so 00:10:13.333 LIB libspdk_bdev_gpt.a 00:10:13.333 SYMLINK libspdk_bdev_error.so 00:10:13.333 SYMLINK libspdk_bdev_null.so 00:10:13.333 LIB libspdk_bdev_malloc.a 00:10:13.333 SO libspdk_bdev_gpt.so.6.0 00:10:13.333 SO libspdk_bdev_malloc.so.6.0 00:10:13.333 LIB libspdk_bdev_lvol.a 00:10:13.591 CC module/bdev/passthru/vbdev_passthru.o 00:10:13.591 SYMLINK libspdk_bdev_gpt.so 00:10:13.591 CC 
module/bdev/raid/bdev_raid.o 00:10:13.591 SYMLINK libspdk_bdev_malloc.so 00:10:13.591 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:13.591 SO libspdk_bdev_lvol.so.6.0 00:10:13.591 CC module/bdev/split/vbdev_split.o 00:10:13.591 CC module/bdev/uring/bdev_uring.o 00:10:13.591 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:13.591 SYMLINK libspdk_bdev_lvol.so 00:10:13.591 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:13.591 CC module/bdev/aio/bdev_aio.o 00:10:13.906 LIB libspdk_bdev_passthru.a 00:10:13.906 CC module/bdev/split/vbdev_split_rpc.o 00:10:13.906 SO libspdk_bdev_passthru.so.6.0 00:10:13.906 CC module/bdev/ftl/bdev_ftl.o 00:10:13.906 CC module/bdev/uring/bdev_uring_rpc.o 00:10:13.906 SYMLINK libspdk_bdev_passthru.so 00:10:13.906 CC module/bdev/nvme/nvme_rpc.o 00:10:13.906 CC module/bdev/iscsi/bdev_iscsi.o 00:10:14.165 LIB libspdk_bdev_zone_block.a 00:10:14.165 CC module/bdev/aio/bdev_aio_rpc.o 00:10:14.165 LIB libspdk_bdev_split.a 00:10:14.165 SO libspdk_bdev_zone_block.so.6.0 00:10:14.165 SO libspdk_bdev_split.so.6.0 00:10:14.165 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:14.165 SYMLINK libspdk_bdev_zone_block.so 00:10:14.165 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:14.165 SYMLINK libspdk_bdev_split.so 00:10:14.165 CC module/bdev/raid/bdev_raid_rpc.o 00:10:14.165 LIB libspdk_bdev_uring.a 00:10:14.165 SO libspdk_bdev_uring.so.6.0 00:10:14.165 LIB libspdk_bdev_aio.a 00:10:14.165 CC module/bdev/nvme/bdev_mdns_client.o 00:10:14.165 SO libspdk_bdev_aio.so.6.0 00:10:14.446 SYMLINK libspdk_bdev_uring.so 00:10:14.446 CC module/bdev/nvme/vbdev_opal.o 00:10:14.446 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:14.446 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:14.446 SYMLINK libspdk_bdev_aio.so 00:10:14.446 CC module/bdev/virtio/bdev_virtio_blk.o 00:10:14.446 LIB libspdk_bdev_iscsi.a 00:10:14.446 SO libspdk_bdev_iscsi.so.6.0 00:10:14.446 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:14.446 SYMLINK libspdk_bdev_iscsi.so 00:10:14.446 CC module/bdev/raid/bdev_raid_sb.o 00:10:14.446 CC module/bdev/raid/raid0.o 00:10:14.446 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:14.446 CC module/bdev/raid/raid1.o 00:10:14.760 LIB libspdk_bdev_ftl.a 00:10:14.760 CC module/bdev/raid/concat.o 00:10:14.760 SO libspdk_bdev_ftl.so.6.0 00:10:14.760 LIB libspdk_bdev_virtio.a 00:10:14.760 SYMLINK libspdk_bdev_ftl.so 00:10:14.760 SO libspdk_bdev_virtio.so.6.0 00:10:15.018 SYMLINK libspdk_bdev_virtio.so 00:10:15.018 LIB libspdk_bdev_raid.a 00:10:15.018 SO libspdk_bdev_raid.so.6.0 00:10:15.018 SYMLINK libspdk_bdev_raid.so 00:10:15.583 LIB libspdk_bdev_nvme.a 00:10:15.875 SO libspdk_bdev_nvme.so.7.0 00:10:15.875 SYMLINK libspdk_bdev_nvme.so 00:10:16.149 CC module/event/subsystems/sock/sock.o 00:10:16.149 CC module/event/subsystems/keyring/keyring.o 00:10:16.149 CC module/event/subsystems/vmd/vmd.o 00:10:16.149 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:16.149 CC module/event/subsystems/iobuf/iobuf.o 00:10:16.149 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:16.474 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:10:16.474 CC module/event/subsystems/scheduler/scheduler.o 00:10:16.474 LIB libspdk_event_keyring.a 00:10:16.474 LIB libspdk_event_sock.a 00:10:16.474 LIB libspdk_event_vmd.a 00:10:16.474 LIB libspdk_event_scheduler.a 00:10:16.474 LIB libspdk_event_iobuf.a 00:10:16.474 LIB libspdk_event_vhost_blk.a 00:10:16.474 SO libspdk_event_keyring.so.1.0 00:10:16.474 SO libspdk_event_sock.so.5.0 00:10:16.474 SO libspdk_event_scheduler.so.4.0 00:10:16.474 SO libspdk_event_iobuf.so.3.0 
00:10:16.474 SO libspdk_event_vmd.so.6.0 00:10:16.474 SO libspdk_event_vhost_blk.so.3.0 00:10:16.474 SYMLINK libspdk_event_sock.so 00:10:16.740 SYMLINK libspdk_event_keyring.so 00:10:16.740 SYMLINK libspdk_event_scheduler.so 00:10:16.740 SYMLINK libspdk_event_iobuf.so 00:10:16.740 SYMLINK libspdk_event_vmd.so 00:10:16.740 SYMLINK libspdk_event_vhost_blk.so 00:10:16.740 CC module/event/subsystems/accel/accel.o 00:10:16.998 LIB libspdk_event_accel.a 00:10:16.998 SO libspdk_event_accel.so.6.0 00:10:16.998 SYMLINK libspdk_event_accel.so 00:10:17.255 CC module/event/subsystems/bdev/bdev.o 00:10:17.514 LIB libspdk_event_bdev.a 00:10:17.514 SO libspdk_event_bdev.so.6.0 00:10:17.514 SYMLINK libspdk_event_bdev.so 00:10:17.771 CC module/event/subsystems/nbd/nbd.o 00:10:17.771 CC module/event/subsystems/ublk/ublk.o 00:10:17.771 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:17.771 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:17.771 CC module/event/subsystems/scsi/scsi.o 00:10:17.771 LIB libspdk_event_nbd.a 00:10:18.029 SO libspdk_event_nbd.so.6.0 00:10:18.029 LIB libspdk_event_ublk.a 00:10:18.029 LIB libspdk_event_scsi.a 00:10:18.029 SO libspdk_event_ublk.so.3.0 00:10:18.029 SO libspdk_event_scsi.so.6.0 00:10:18.029 SYMLINK libspdk_event_nbd.so 00:10:18.029 LIB libspdk_event_nvmf.a 00:10:18.029 SYMLINK libspdk_event_ublk.so 00:10:18.029 SYMLINK libspdk_event_scsi.so 00:10:18.029 SO libspdk_event_nvmf.so.6.0 00:10:18.029 SYMLINK libspdk_event_nvmf.so 00:10:18.287 CC module/event/subsystems/iscsi/iscsi.o 00:10:18.287 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:18.545 LIB libspdk_event_vhost_scsi.a 00:10:18.545 SO libspdk_event_vhost_scsi.so.3.0 00:10:18.545 LIB libspdk_event_iscsi.a 00:10:18.545 SO libspdk_event_iscsi.so.6.0 00:10:18.545 SYMLINK libspdk_event_vhost_scsi.so 00:10:18.545 SYMLINK libspdk_event_iscsi.so 00:10:18.545 SO libspdk.so.6.0 00:10:18.545 SYMLINK libspdk.so 00:10:18.803 CXX app/trace/trace.o 00:10:18.803 TEST_HEADER include/spdk/accel.h 00:10:18.803 TEST_HEADER include/spdk/accel_module.h 00:10:18.803 TEST_HEADER include/spdk/assert.h 00:10:18.804 TEST_HEADER include/spdk/barrier.h 00:10:18.804 TEST_HEADER include/spdk/base64.h 00:10:18.804 TEST_HEADER include/spdk/bdev.h 00:10:18.804 TEST_HEADER include/spdk/bdev_module.h 00:10:18.804 TEST_HEADER include/spdk/bdev_zone.h 00:10:18.804 TEST_HEADER include/spdk/bit_array.h 00:10:18.804 TEST_HEADER include/spdk/bit_pool.h 00:10:18.804 TEST_HEADER include/spdk/blob_bdev.h 00:10:18.804 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:18.804 TEST_HEADER include/spdk/blobfs.h 00:10:18.804 TEST_HEADER include/spdk/blob.h 00:10:18.804 TEST_HEADER include/spdk/conf.h 00:10:18.804 TEST_HEADER include/spdk/config.h 00:10:18.804 TEST_HEADER include/spdk/cpuset.h 00:10:18.804 TEST_HEADER include/spdk/crc16.h 00:10:19.062 TEST_HEADER include/spdk/crc32.h 00:10:19.062 TEST_HEADER include/spdk/crc64.h 00:10:19.062 TEST_HEADER include/spdk/dif.h 00:10:19.062 TEST_HEADER include/spdk/dma.h 00:10:19.062 TEST_HEADER include/spdk/endian.h 00:10:19.062 TEST_HEADER include/spdk/env_dpdk.h 00:10:19.062 TEST_HEADER include/spdk/env.h 00:10:19.062 TEST_HEADER include/spdk/event.h 00:10:19.062 TEST_HEADER include/spdk/fd_group.h 00:10:19.062 TEST_HEADER include/spdk/fd.h 00:10:19.062 TEST_HEADER include/spdk/file.h 00:10:19.062 TEST_HEADER include/spdk/ftl.h 00:10:19.062 TEST_HEADER include/spdk/gpt_spec.h 00:10:19.062 TEST_HEADER include/spdk/hexlify.h 00:10:19.062 TEST_HEADER include/spdk/histogram_data.h 00:10:19.062 TEST_HEADER 
include/spdk/idxd.h 00:10:19.062 TEST_HEADER include/spdk/idxd_spec.h 00:10:19.062 TEST_HEADER include/spdk/init.h 00:10:19.062 TEST_HEADER include/spdk/ioat.h 00:10:19.062 TEST_HEADER include/spdk/ioat_spec.h 00:10:19.062 TEST_HEADER include/spdk/iscsi_spec.h 00:10:19.062 TEST_HEADER include/spdk/json.h 00:10:19.062 TEST_HEADER include/spdk/jsonrpc.h 00:10:19.062 TEST_HEADER include/spdk/keyring.h 00:10:19.062 TEST_HEADER include/spdk/keyring_module.h 00:10:19.062 TEST_HEADER include/spdk/likely.h 00:10:19.062 CC examples/accel/perf/accel_perf.o 00:10:19.062 TEST_HEADER include/spdk/log.h 00:10:19.062 TEST_HEADER include/spdk/lvol.h 00:10:19.062 TEST_HEADER include/spdk/memory.h 00:10:19.062 TEST_HEADER include/spdk/mmio.h 00:10:19.062 TEST_HEADER include/spdk/nbd.h 00:10:19.062 TEST_HEADER include/spdk/notify.h 00:10:19.062 TEST_HEADER include/spdk/nvme.h 00:10:19.062 TEST_HEADER include/spdk/nvme_intel.h 00:10:19.062 CC test/blobfs/mkfs/mkfs.o 00:10:19.062 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:19.062 CC test/app/bdev_svc/bdev_svc.o 00:10:19.062 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:19.062 TEST_HEADER include/spdk/nvme_spec.h 00:10:19.062 TEST_HEADER include/spdk/nvme_zns.h 00:10:19.062 CC examples/bdev/hello_world/hello_bdev.o 00:10:19.062 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:19.062 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:19.062 CC test/bdev/bdevio/bdevio.o 00:10:19.062 CC test/accel/dif/dif.o 00:10:19.062 CC test/dma/test_dma/test_dma.o 00:10:19.062 TEST_HEADER include/spdk/nvmf.h 00:10:19.062 TEST_HEADER include/spdk/nvmf_spec.h 00:10:19.062 TEST_HEADER include/spdk/nvmf_transport.h 00:10:19.062 CC examples/blob/hello_world/hello_blob.o 00:10:19.062 TEST_HEADER include/spdk/opal.h 00:10:19.062 TEST_HEADER include/spdk/opal_spec.h 00:10:19.063 TEST_HEADER include/spdk/pci_ids.h 00:10:19.063 TEST_HEADER include/spdk/pipe.h 00:10:19.063 TEST_HEADER include/spdk/queue.h 00:10:19.063 TEST_HEADER include/spdk/reduce.h 00:10:19.063 TEST_HEADER include/spdk/rpc.h 00:10:19.063 TEST_HEADER include/spdk/scheduler.h 00:10:19.063 TEST_HEADER include/spdk/scsi.h 00:10:19.063 TEST_HEADER include/spdk/scsi_spec.h 00:10:19.063 TEST_HEADER include/spdk/sock.h 00:10:19.063 TEST_HEADER include/spdk/stdinc.h 00:10:19.063 TEST_HEADER include/spdk/string.h 00:10:19.063 TEST_HEADER include/spdk/thread.h 00:10:19.063 TEST_HEADER include/spdk/trace.h 00:10:19.063 TEST_HEADER include/spdk/trace_parser.h 00:10:19.063 TEST_HEADER include/spdk/tree.h 00:10:19.063 TEST_HEADER include/spdk/ublk.h 00:10:19.063 TEST_HEADER include/spdk/util.h 00:10:19.063 TEST_HEADER include/spdk/uuid.h 00:10:19.063 TEST_HEADER include/spdk/version.h 00:10:19.063 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:19.063 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:19.063 TEST_HEADER include/spdk/vhost.h 00:10:19.063 TEST_HEADER include/spdk/vmd.h 00:10:19.063 TEST_HEADER include/spdk/xor.h 00:10:19.063 TEST_HEADER include/spdk/zipf.h 00:10:19.063 CXX test/cpp_headers/accel.o 00:10:19.322 LINK bdev_svc 00:10:19.322 LINK mkfs 00:10:19.322 LINK hello_blob 00:10:19.581 CXX test/cpp_headers/accel_module.o 00:10:19.581 LINK hello_bdev 00:10:19.581 LINK dif 00:10:19.581 LINK accel_perf 00:10:19.581 LINK bdevio 00:10:19.581 LINK test_dma 00:10:19.581 CXX test/cpp_headers/assert.o 00:10:19.581 LINK spdk_trace 00:10:19.846 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:19.846 CXX test/cpp_headers/barrier.o 00:10:19.846 CC examples/blob/cli/blobcli.o 00:10:19.846 CC examples/bdev/bdevperf/bdevperf.o 00:10:19.846 CC 
test/app/histogram_perf/histogram_perf.o 00:10:19.846 CC test/app/jsoncat/jsoncat.o 00:10:19.846 CC examples/ioat/perf/perf.o 00:10:19.846 CC app/trace_record/trace_record.o 00:10:19.846 CXX test/cpp_headers/base64.o 00:10:19.846 CC examples/ioat/verify/verify.o 00:10:20.109 CC examples/nvme/hello_world/hello_world.o 00:10:20.109 LINK jsoncat 00:10:20.109 LINK histogram_perf 00:10:20.109 CXX test/cpp_headers/bdev.o 00:10:20.109 LINK ioat_perf 00:10:20.109 LINK spdk_trace_record 00:10:20.109 LINK nvme_fuzz 00:10:20.109 LINK verify 00:10:20.109 CXX test/cpp_headers/bdev_module.o 00:10:20.109 CXX test/cpp_headers/bdev_zone.o 00:10:20.368 LINK hello_world 00:10:20.368 LINK blobcli 00:10:20.368 CXX test/cpp_headers/bit_array.o 00:10:20.368 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:20.368 CC app/nvmf_tgt/nvmf_main.o 00:10:20.368 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:20.626 CC test/event/event_perf/event_perf.o 00:10:20.626 CC examples/nvme/reconnect/reconnect.o 00:10:20.626 CC test/env/mem_callbacks/mem_callbacks.o 00:10:20.626 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:20.626 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:20.626 CXX test/cpp_headers/bit_pool.o 00:10:20.626 LINK nvmf_tgt 00:10:20.626 CC test/lvol/esnap/esnap.o 00:10:20.626 LINK bdevperf 00:10:20.626 LINK event_perf 00:10:20.885 CXX test/cpp_headers/blob_bdev.o 00:10:20.885 LINK reconnect 00:10:20.885 CC test/event/reactor/reactor.o 00:10:20.885 CC test/event/reactor_perf/reactor_perf.o 00:10:21.143 CC app/iscsi_tgt/iscsi_tgt.o 00:10:21.143 LINK vhost_fuzz 00:10:21.143 CXX test/cpp_headers/blobfs_bdev.o 00:10:21.143 LINK reactor 00:10:21.143 LINK reactor_perf 00:10:21.143 LINK nvme_manage 00:10:21.143 LINK iscsi_tgt 00:10:21.402 LINK mem_callbacks 00:10:21.402 CC test/nvme/aer/aer.o 00:10:21.402 CC test/rpc_client/rpc_client_test.o 00:10:21.402 CXX test/cpp_headers/blobfs.o 00:10:21.402 CC test/event/app_repeat/app_repeat.o 00:10:21.402 CC examples/nvme/arbitration/arbitration.o 00:10:21.402 CC test/thread/poller_perf/poller_perf.o 00:10:21.402 CC test/env/vtophys/vtophys.o 00:10:21.660 CXX test/cpp_headers/blob.o 00:10:21.660 LINK rpc_client_test 00:10:21.660 LINK app_repeat 00:10:21.660 CC app/spdk_tgt/spdk_tgt.o 00:10:21.660 LINK poller_perf 00:10:21.660 LINK vtophys 00:10:21.660 LINK aer 00:10:21.660 CXX test/cpp_headers/conf.o 00:10:21.918 LINK arbitration 00:10:21.918 LINK spdk_tgt 00:10:21.918 CC examples/nvme/hotplug/hotplug.o 00:10:21.918 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:21.918 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:21.918 CXX test/cpp_headers/config.o 00:10:21.918 CXX test/cpp_headers/cpuset.o 00:10:21.918 CXX test/cpp_headers/crc16.o 00:10:21.918 CC test/event/scheduler/scheduler.o 00:10:21.918 CC test/nvme/reset/reset.o 00:10:22.176 LINK env_dpdk_post_init 00:10:22.176 LINK cmb_copy 00:10:22.176 CXX test/cpp_headers/crc32.o 00:10:22.176 CC app/spdk_lspci/spdk_lspci.o 00:10:22.176 LINK iscsi_fuzz 00:10:22.176 CC examples/nvme/abort/abort.o 00:10:22.176 LINK hotplug 00:10:22.176 LINK scheduler 00:10:22.434 LINK spdk_lspci 00:10:22.434 LINK reset 00:10:22.434 CXX test/cpp_headers/crc64.o 00:10:22.434 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:22.434 CC test/env/memory/memory_ut.o 00:10:22.434 CXX test/cpp_headers/dif.o 00:10:22.434 CC test/app/stub/stub.o 00:10:22.434 CXX test/cpp_headers/dma.o 00:10:22.692 CC app/spdk_nvme_perf/perf.o 00:10:22.692 LINK pmr_persistence 00:10:22.692 CC test/nvme/sgl/sgl.o 00:10:22.692 LINK abort 00:10:22.692 CC 
test/env/pci/pci_ut.o 00:10:22.692 CXX test/cpp_headers/endian.o 00:10:22.692 LINK stub 00:10:22.692 CC test/nvme/e2edp/nvme_dp.o 00:10:22.950 CXX test/cpp_headers/env_dpdk.o 00:10:22.950 CC test/nvme/overhead/overhead.o 00:10:22.950 LINK sgl 00:10:22.950 CC test/nvme/err_injection/err_injection.o 00:10:22.950 CC examples/sock/hello_world/hello_sock.o 00:10:22.950 LINK nvme_dp 00:10:22.950 LINK pci_ut 00:10:22.950 CXX test/cpp_headers/env.o 00:10:23.209 CC test/nvme/startup/startup.o 00:10:23.209 LINK overhead 00:10:23.209 LINK err_injection 00:10:23.209 CXX test/cpp_headers/event.o 00:10:23.209 LINK hello_sock 00:10:23.467 CC test/nvme/reserve/reserve.o 00:10:23.467 LINK startup 00:10:23.467 LINK memory_ut 00:10:23.467 CXX test/cpp_headers/fd_group.o 00:10:23.467 CC app/spdk_nvme_identify/identify.o 00:10:23.467 CC test/nvme/simple_copy/simple_copy.o 00:10:23.467 CC test/nvme/connect_stress/connect_stress.o 00:10:23.467 LINK spdk_nvme_perf 00:10:23.725 LINK reserve 00:10:23.725 CXX test/cpp_headers/fd.o 00:10:23.725 CC examples/vmd/lsvmd/lsvmd.o 00:10:23.725 CC test/nvme/boot_partition/boot_partition.o 00:10:23.725 LINK connect_stress 00:10:23.725 CC examples/vmd/led/led.o 00:10:23.725 LINK simple_copy 00:10:23.725 CXX test/cpp_headers/file.o 00:10:23.725 LINK lsvmd 00:10:23.983 LINK boot_partition 00:10:23.983 LINK led 00:10:23.983 CXX test/cpp_headers/ftl.o 00:10:23.983 CC test/nvme/compliance/nvme_compliance.o 00:10:23.983 CC examples/nvmf/nvmf/nvmf.o 00:10:24.241 CC test/nvme/fused_ordering/fused_ordering.o 00:10:24.241 CC examples/util/zipf/zipf.o 00:10:24.241 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:24.241 CC test/nvme/fdp/fdp.o 00:10:24.241 CC test/nvme/cuse/cuse.o 00:10:24.499 LINK zipf 00:10:24.499 CXX test/cpp_headers/gpt_spec.o 00:10:24.499 LINK fused_ordering 00:10:24.499 LINK doorbell_aers 00:10:24.499 LINK nvmf 00:10:24.499 LINK spdk_nvme_identify 00:10:24.499 LINK nvme_compliance 00:10:24.499 CXX test/cpp_headers/hexlify.o 00:10:24.499 LINK fdp 00:10:24.758 CC app/spdk_nvme_discover/discovery_aer.o 00:10:24.758 CC app/spdk_top/spdk_top.o 00:10:24.758 CXX test/cpp_headers/histogram_data.o 00:10:24.758 LINK spdk_nvme_discover 00:10:25.016 CC app/spdk_dd/spdk_dd.o 00:10:25.016 CXX test/cpp_headers/idxd.o 00:10:25.016 CC app/vhost/vhost.o 00:10:25.016 CC app/fio/nvme/fio_plugin.o 00:10:25.016 CC examples/thread/thread/thread_ex.o 00:10:25.016 CC examples/idxd/perf/perf.o 00:10:25.016 CXX test/cpp_headers/idxd_spec.o 00:10:25.016 LINK vhost 00:10:25.275 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:25.275 LINK thread 00:10:25.275 CXX test/cpp_headers/init.o 00:10:25.275 CXX test/cpp_headers/ioat.o 00:10:25.275 LINK idxd_perf 00:10:25.275 LINK spdk_dd 00:10:25.275 LINK interrupt_tgt 00:10:25.533 CXX test/cpp_headers/ioat_spec.o 00:10:25.533 LINK spdk_nvme 00:10:25.533 CXX test/cpp_headers/iscsi_spec.o 00:10:25.533 LINK esnap 00:10:25.533 CXX test/cpp_headers/json.o 00:10:25.533 LINK spdk_top 00:10:25.533 LINK cuse 00:10:25.533 CXX test/cpp_headers/jsonrpc.o 00:10:25.533 CXX test/cpp_headers/keyring.o 00:10:25.533 CXX test/cpp_headers/keyring_module.o 00:10:25.533 CC app/fio/bdev/fio_plugin.o 00:10:25.792 CXX test/cpp_headers/likely.o 00:10:25.792 CXX test/cpp_headers/log.o 00:10:25.792 CXX test/cpp_headers/memory.o 00:10:25.792 CXX test/cpp_headers/lvol.o 00:10:25.792 CXX test/cpp_headers/mmio.o 00:10:25.792 CXX test/cpp_headers/nbd.o 00:10:25.792 CXX test/cpp_headers/notify.o 00:10:25.792 CXX test/cpp_headers/nvme.o 00:10:25.792 CXX test/cpp_headers/nvme_intel.o 
00:10:25.792 CXX test/cpp_headers/nvme_ocssd.o 00:10:25.792 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:25.792 CXX test/cpp_headers/nvme_spec.o 00:10:25.792 CXX test/cpp_headers/nvme_zns.o 00:10:26.049 CXX test/cpp_headers/nvmf_cmd.o 00:10:26.049 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:26.049 CXX test/cpp_headers/nvmf.o 00:10:26.049 CXX test/cpp_headers/nvmf_spec.o 00:10:26.049 CXX test/cpp_headers/nvmf_transport.o 00:10:26.049 CXX test/cpp_headers/opal.o 00:10:26.049 CXX test/cpp_headers/opal_spec.o 00:10:26.049 CXX test/cpp_headers/pci_ids.o 00:10:26.049 CXX test/cpp_headers/pipe.o 00:10:26.049 LINK spdk_bdev 00:10:26.049 CXX test/cpp_headers/queue.o 00:10:26.308 CXX test/cpp_headers/reduce.o 00:10:26.308 CXX test/cpp_headers/rpc.o 00:10:26.308 CXX test/cpp_headers/scheduler.o 00:10:26.308 CXX test/cpp_headers/scsi.o 00:10:26.308 CXX test/cpp_headers/scsi_spec.o 00:10:26.308 CXX test/cpp_headers/sock.o 00:10:26.308 CXX test/cpp_headers/stdinc.o 00:10:26.308 CXX test/cpp_headers/string.o 00:10:26.308 CXX test/cpp_headers/thread.o 00:10:26.308 CXX test/cpp_headers/trace.o 00:10:26.308 CXX test/cpp_headers/trace_parser.o 00:10:26.308 CXX test/cpp_headers/tree.o 00:10:26.308 CXX test/cpp_headers/ublk.o 00:10:26.308 CXX test/cpp_headers/util.o 00:10:26.568 CXX test/cpp_headers/uuid.o 00:10:26.568 CXX test/cpp_headers/version.o 00:10:26.568 CXX test/cpp_headers/vfio_user_pci.o 00:10:26.568 CXX test/cpp_headers/vfio_user_spec.o 00:10:26.568 CXX test/cpp_headers/vhost.o 00:10:26.568 CXX test/cpp_headers/vmd.o 00:10:26.568 CXX test/cpp_headers/xor.o 00:10:26.568 CXX test/cpp_headers/zipf.o 00:10:26.827 00:10:26.827 real 1m4.399s 00:10:26.827 user 6m58.126s 00:10:26.827 sys 1m31.911s 00:10:26.827 14:29:35 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:10:26.827 14:29:35 -- common/autotest_common.sh@10 -- $ set +x 00:10:26.827 ************************************ 00:10:26.827 END TEST make 00:10:26.827 ************************************ 00:10:26.827 14:29:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:26.827 14:29:35 -- pm/common@30 -- $ signal_monitor_resources TERM 00:10:26.827 14:29:35 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:10:26.828 14:29:35 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:26.828 14:29:35 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:26.828 14:29:35 -- pm/common@45 -- $ pid=5221 00:10:26.828 14:29:35 -- pm/common@52 -- $ sudo kill -TERM 5221 00:10:26.828 14:29:35 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:26.828 14:29:35 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:26.828 14:29:35 -- pm/common@45 -- $ pid=5220 00:10:26.828 14:29:35 -- pm/common@52 -- $ sudo kill -TERM 5220 00:10:26.828 14:29:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:26.828 14:29:35 -- nvmf/common.sh@7 -- # uname -s 00:10:26.828 14:29:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.828 14:29:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.828 14:29:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.828 14:29:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.828 14:29:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.828 14:29:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.828 14:29:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.828 14:29:35 -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:10:26.828 14:29:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.828 14:29:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.828 14:29:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:10:26.828 14:29:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:10:26.828 14:29:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.828 14:29:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.828 14:29:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:26.828 14:29:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.828 14:29:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:26.828 14:29:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.828 14:29:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.828 14:29:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.828 14:29:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.828 14:29:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.828 14:29:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.828 14:29:35 -- paths/export.sh@5 -- # export PATH 00:10:26.828 14:29:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.828 14:29:35 -- nvmf/common.sh@47 -- # : 0 00:10:26.828 14:29:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.828 14:29:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.828 14:29:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.828 14:29:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.828 14:29:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.828 14:29:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.828 14:29:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.828 14:29:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.828 14:29:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:26.828 14:29:35 -- spdk/autotest.sh@32 -- # uname -s 00:10:26.828 14:29:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:26.828 14:29:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:26.828 14:29:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:26.828 14:29:35 -- spdk/autotest.sh@39 -- # echo 
'|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:26.828 14:29:35 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:26.828 14:29:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:27.087 14:29:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:27.087 14:29:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:27.087 14:29:35 -- spdk/autotest.sh@48 -- # udevadm_pid=52195 00:10:27.087 14:29:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:27.087 14:29:35 -- pm/common@17 -- # local monitor 00:10:27.087 14:29:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:27.087 14:29:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:27.087 14:29:35 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52198 00:10:27.087 14:29:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:27.087 14:29:35 -- pm/common@21 -- # date +%s 00:10:27.087 14:29:35 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52201 00:10:27.087 14:29:35 -- pm/common@26 -- # sleep 1 00:10:27.087 14:29:35 -- pm/common@21 -- # date +%s 00:10:27.087 14:29:35 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713364175 00:10:27.087 14:29:35 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713364175 00:10:27.087 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713364175_collect-vmstat.pm.log 00:10:27.087 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713364175_collect-cpu-load.pm.log 00:10:28.026 14:29:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:28.026 14:29:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:28.026 14:29:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:28.026 14:29:36 -- common/autotest_common.sh@10 -- # set +x 00:10:28.026 14:29:36 -- spdk/autotest.sh@59 -- # create_test_list 00:10:28.026 14:29:36 -- common/autotest_common.sh@734 -- # xtrace_disable 00:10:28.026 14:29:36 -- common/autotest_common.sh@10 -- # set +x 00:10:28.026 14:29:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:28.026 14:29:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:28.026 14:29:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:28.026 14:29:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:28.026 14:29:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:28.026 14:29:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:28.026 14:29:36 -- common/autotest_common.sh@1441 -- # uname 00:10:28.026 14:29:36 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:10:28.026 14:29:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:28.026 14:29:36 -- common/autotest_common.sh@1461 -- # uname 00:10:28.026 14:29:36 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:10:28.026 14:29:36 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:10:28.026 14:29:36 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:10:28.026 14:29:36 -- spdk/autotest.sh@72 -- # hash lcov 00:10:28.026 14:29:36 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:10:28.026 14:29:36 -- 
spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:10:28.026 --rc lcov_branch_coverage=1 00:10:28.026 --rc lcov_function_coverage=1 00:10:28.026 --rc genhtml_branch_coverage=1 00:10:28.026 --rc genhtml_function_coverage=1 00:10:28.026 --rc genhtml_legend=1 00:10:28.026 --rc geninfo_all_blocks=1 00:10:28.026 ' 00:10:28.026 14:29:36 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:10:28.026 --rc lcov_branch_coverage=1 00:10:28.026 --rc lcov_function_coverage=1 00:10:28.026 --rc genhtml_branch_coverage=1 00:10:28.026 --rc genhtml_function_coverage=1 00:10:28.026 --rc genhtml_legend=1 00:10:28.026 --rc geninfo_all_blocks=1 00:10:28.026 ' 00:10:28.027 14:29:36 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:10:28.027 --rc lcov_branch_coverage=1 00:10:28.027 --rc lcov_function_coverage=1 00:10:28.027 --rc genhtml_branch_coverage=1 00:10:28.027 --rc genhtml_function_coverage=1 00:10:28.027 --rc genhtml_legend=1 00:10:28.027 --rc geninfo_all_blocks=1 00:10:28.027 --no-external' 00:10:28.027 14:29:36 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:10:28.027 --rc lcov_branch_coverage=1 00:10:28.027 --rc lcov_function_coverage=1 00:10:28.027 --rc genhtml_branch_coverage=1 00:10:28.027 --rc genhtml_function_coverage=1 00:10:28.027 --rc genhtml_legend=1 00:10:28.027 --rc geninfo_all_blocks=1 00:10:28.027 --no-external' 00:10:28.027 14:29:36 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:10:28.027 lcov: LCOV version 1.14 00:10:28.027 14:29:36 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:38.001 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:10:38.001 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:10:38.001 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:10:38.001 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:10:38.001 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:10:38.001 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:10:44.559 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:44.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:10:59.441 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no 
functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:10:59.441 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:10:59.441 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:10:59.442 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:10:59.442 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:10:59.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:11:02.729 14:30:10 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:11:02.729 14:30:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:02.729 14:30:10 -- common/autotest_common.sh@10 -- # set +x 00:11:02.729 14:30:10 -- spdk/autotest.sh@91 -- # rm -f 00:11:02.729 14:30:10 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:02.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:02.988 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:11:02.988 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:11:02.988 14:30:11 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:11:02.988 14:30:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:11:02.988 14:30:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:11:02.988 14:30:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:11:02.988 14:30:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:02.988 14:30:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:11:02.988 14:30:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:11:02.988 14:30:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:02.988 14:30:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:02.988 14:30:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:02.988 14:30:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:11:02.988 14:30:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:11:02.988 14:30:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:02.988 14:30:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:02.988 14:30:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:02.988 14:30:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:11:02.989 14:30:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:11:02.989 14:30:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:02.989 14:30:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:02.989 14:30:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:02.989 14:30:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:11:02.989 14:30:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:11:02.989 14:30:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:02.989 14:30:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:02.989 14:30:11 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:11:02.989 14:30:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:02.989 14:30:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:02.989 14:30:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:11:02.989 14:30:11 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:11:02.989 
14:30:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:03.247 No valid GPT data, bailing 00:11:03.247 14:30:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:03.247 14:30:11 -- scripts/common.sh@391 -- # pt= 00:11:03.247 14:30:11 -- scripts/common.sh@392 -- # return 1 00:11:03.247 14:30:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:03.247 1+0 records in 00:11:03.247 1+0 records out 00:11:03.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543155 s, 193 MB/s 00:11:03.247 14:30:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:03.247 14:30:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:03.247 14:30:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:11:03.247 14:30:11 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:11:03.247 14:30:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:11:03.248 No valid GPT data, bailing 00:11:03.248 14:30:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:03.248 14:30:11 -- scripts/common.sh@391 -- # pt= 00:11:03.248 14:30:11 -- scripts/common.sh@392 -- # return 1 00:11:03.248 14:30:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:11:03.248 1+0 records in 00:11:03.248 1+0 records out 00:11:03.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414622 s, 253 MB/s 00:11:03.248 14:30:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:03.248 14:30:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:03.248 14:30:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:11:03.248 14:30:11 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:11:03.248 14:30:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:11:03.248 No valid GPT data, bailing 00:11:03.248 14:30:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:11:03.248 14:30:11 -- scripts/common.sh@391 -- # pt= 00:11:03.248 14:30:11 -- scripts/common.sh@392 -- # return 1 00:11:03.248 14:30:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:11:03.248 1+0 records in 00:11:03.248 1+0 records out 00:11:03.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415004 s, 253 MB/s 00:11:03.248 14:30:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:03.248 14:30:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:03.248 14:30:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:11:03.248 14:30:11 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:11:03.248 14:30:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:11:03.506 No valid GPT data, bailing 00:11:03.506 14:30:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:11:03.506 14:30:11 -- scripts/common.sh@391 -- # pt= 00:11:03.506 14:30:11 -- scripts/common.sh@392 -- # return 1 00:11:03.506 14:30:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:11:03.506 1+0 records in 00:11:03.506 1+0 records out 00:11:03.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00318184 s, 330 MB/s 00:11:03.506 14:30:11 -- spdk/autotest.sh@118 -- # sync 00:11:03.506 14:30:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:03.506 14:30:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:03.506 14:30:11 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:11:05.416 14:30:13 -- spdk/autotest.sh@124 -- # uname -s 00:11:05.416 14:30:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:11:05.416 14:30:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:05.416 14:30:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:05.416 14:30:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.416 14:30:13 -- common/autotest_common.sh@10 -- # set +x 00:11:05.416 ************************************ 00:11:05.416 START TEST setup.sh 00:11:05.416 ************************************ 00:11:05.416 14:30:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:05.416 * Looking for test storage... 00:11:05.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:05.416 14:30:13 -- setup/test-setup.sh@10 -- # uname -s 00:11:05.416 14:30:13 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:11:05.416 14:30:13 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:05.416 14:30:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:05.416 14:30:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.416 14:30:13 -- common/autotest_common.sh@10 -- # set +x 00:11:05.416 ************************************ 00:11:05.416 START TEST acl 00:11:05.416 ************************************ 00:11:05.416 14:30:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:05.674 * Looking for test storage... 00:11:05.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:05.674 14:30:14 -- setup/acl.sh@10 -- # get_zoned_devs 00:11:05.674 14:30:14 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:11:05.674 14:30:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:11:05.674 14:30:14 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:11:05.674 14:30:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:05.674 14:30:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:11:05.674 14:30:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:11:05.674 14:30:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:05.674 14:30:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:05.674 14:30:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:05.674 14:30:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:11:05.674 14:30:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:11:05.674 14:30:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:05.675 14:30:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:05.675 14:30:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:05.675 14:30:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:11:05.675 14:30:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:11:05.675 14:30:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:05.675 14:30:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:05.675 14:30:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:05.675 14:30:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:11:05.675 14:30:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 
00:11:05.675 14:30:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:05.675 14:30:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:05.675 14:30:14 -- setup/acl.sh@12 -- # devs=() 00:11:05.675 14:30:14 -- setup/acl.sh@12 -- # declare -a devs 00:11:05.675 14:30:14 -- setup/acl.sh@13 -- # drivers=() 00:11:05.675 14:30:14 -- setup/acl.sh@13 -- # declare -A drivers 00:11:05.675 14:30:14 -- setup/acl.sh@51 -- # setup reset 00:11:05.675 14:30:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:05.675 14:30:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:06.242 14:30:14 -- setup/acl.sh@52 -- # collect_setup_devs 00:11:06.242 14:30:14 -- setup/acl.sh@16 -- # local dev driver 00:11:06.242 14:30:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:06.242 14:30:14 -- setup/acl.sh@15 -- # setup output status 00:11:06.242 14:30:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:06.242 14:30:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:07.179 14:30:15 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:11:07.179 14:30:15 -- setup/acl.sh@19 -- # continue 00:11:07.179 14:30:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:07.179 Hugepages 00:11:07.179 node hugesize free / total 00:11:07.179 14:30:15 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:11:07.179 14:30:15 -- setup/acl.sh@19 -- # continue 00:11:07.179 14:30:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:07.179 00:11:07.179 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:07.179 14:30:15 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:11:07.179 14:30:15 -- setup/acl.sh@19 -- # continue 00:11:07.179 14:30:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:07.179 14:30:15 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:11:07.179 14:30:15 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:11:07.179 14:30:15 -- setup/acl.sh@20 -- # continue 00:11:07.179 14:30:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:07.179 14:30:15 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:11:07.179 14:30:15 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:07.179 14:30:15 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:11:07.179 14:30:15 -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:07.179 14:30:15 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:07.179 14:30:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:07.179 14:30:15 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:11:07.179 14:30:15 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:07.179 14:30:15 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:07.179 14:30:15 -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:07.179 14:30:15 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:07.179 14:30:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:07.179 14:30:15 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:11:07.179 14:30:15 -- setup/acl.sh@54 -- # run_test denied denied 00:11:07.179 14:30:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:07.179 14:30:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.179 14:30:15 -- common/autotest_common.sh@10 -- # set +x 00:11:07.179 ************************************ 00:11:07.179 START TEST denied 00:11:07.179 ************************************ 00:11:07.179 14:30:15 -- common/autotest_common.sh@1111 -- # denied 00:11:07.179 14:30:15 -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:11:07.179 14:30:15 -- setup/acl.sh@38 -- # setup output config 00:11:07.179 14:30:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:07.179 14:30:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:07.179 14:30:15 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:11:08.142 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:11:08.142 14:30:16 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:11:08.142 14:30:16 -- setup/acl.sh@28 -- # local dev driver 00:11:08.142 14:30:16 -- setup/acl.sh@30 -- # for dev in "$@" 00:11:08.142 14:30:16 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:11:08.142 14:30:16 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:11:08.142 14:30:16 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:08.142 14:30:16 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:08.142 14:30:16 -- setup/acl.sh@41 -- # setup reset 00:11:08.142 14:30:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:08.142 14:30:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:08.723 00:11:08.723 real 0m1.410s 00:11:08.723 user 0m0.537s 00:11:08.723 sys 0m0.820s 00:11:08.723 14:30:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:08.723 14:30:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.723 ************************************ 00:11:08.723 END TEST denied 00:11:08.723 ************************************ 00:11:08.723 14:30:17 -- setup/acl.sh@55 -- # run_test allowed allowed 00:11:08.723 14:30:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:08.723 14:30:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:08.723 14:30:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.723 ************************************ 00:11:08.723 START TEST allowed 00:11:08.723 ************************************ 00:11:08.723 14:30:17 -- common/autotest_common.sh@1111 -- # allowed 00:11:08.723 14:30:17 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:11:08.723 14:30:17 -- setup/acl.sh@45 -- # setup output config 00:11:08.723 14:30:17 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:11:08.723 14:30:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:08.723 14:30:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:09.659 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:09.659 14:30:18 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:11:09.659 14:30:18 -- setup/acl.sh@28 -- # local dev driver 00:11:09.659 14:30:18 -- setup/acl.sh@30 -- # for dev in "$@" 00:11:09.659 14:30:18 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:11:09.659 14:30:18 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:11:09.659 14:30:18 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:09.659 14:30:18 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:09.659 14:30:18 -- setup/acl.sh@48 -- # setup reset 00:11:09.659 14:30:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:09.659 14:30:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:10.227 00:11:10.227 real 0m1.540s 00:11:10.227 user 0m0.688s 00:11:10.227 sys 0m0.848s 00:11:10.227 14:30:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:10.227 14:30:18 -- common/autotest_common.sh@10 -- # set +x 00:11:10.227 
************************************ 00:11:10.227 END TEST allowed 00:11:10.227 ************************************ 00:11:10.227 00:11:10.227 real 0m4.872s 00:11:10.227 user 0m2.104s 00:11:10.227 sys 0m2.704s 00:11:10.227 14:30:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:10.227 14:30:18 -- common/autotest_common.sh@10 -- # set +x 00:11:10.227 ************************************ 00:11:10.227 END TEST acl 00:11:10.227 ************************************ 00:11:10.487 14:30:18 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:10.487 14:30:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:10.487 14:30:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.487 14:30:18 -- common/autotest_common.sh@10 -- # set +x 00:11:10.487 ************************************ 00:11:10.487 START TEST hugepages 00:11:10.487 ************************************ 00:11:10.487 14:30:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:10.487 * Looking for test storage... 00:11:10.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:10.487 14:30:19 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:11:10.487 14:30:19 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:11:10.487 14:30:19 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:11:10.487 14:30:19 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:11:10.487 14:30:19 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:11:10.487 14:30:19 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:11:10.487 14:30:19 -- setup/common.sh@17 -- # local get=Hugepagesize 00:11:10.487 14:30:19 -- setup/common.sh@18 -- # local node= 00:11:10.487 14:30:19 -- setup/common.sh@19 -- # local var val 00:11:10.487 14:30:19 -- setup/common.sh@20 -- # local mem_f mem 00:11:10.487 14:30:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:10.487 14:30:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:10.487 14:30:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:10.487 14:30:19 -- setup/common.sh@28 -- # mapfile -t mem 00:11:10.487 14:30:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:10.487 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5592108 kB' 'MemAvailable: 7392636 kB' 'Buffers: 2436 kB' 'Cached: 2013204 kB' 'SwapCached: 0 kB' 'Active: 837808 kB' 'Inactive: 1283612 kB' 'Active(anon): 116268 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1320 kB' 'Writeback: 0 kB' 'AnonPages: 107452 kB' 'Mapped: 51612 kB' 'Shmem: 10488 kB' 'KReclaimable: 64628 kB' 'Slab: 141356 kB' 'SReclaimable: 64628 kB' 'SUnreclaim: 76728 kB' 'KernelStack: 6540 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 340972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r 
var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.488 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.488 14:30:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # continue 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # IFS=': ' 00:11:10.489 14:30:19 -- setup/common.sh@31 -- # read -r var val _ 00:11:10.489 14:30:19 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:10.489 14:30:19 -- setup/common.sh@33 -- # echo 2048 00:11:10.489 14:30:19 -- setup/common.sh@33 -- # return 0 00:11:10.489 14:30:19 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:11:10.489 14:30:19 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:11:10.489 14:30:19 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:11:10.489 14:30:19 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:11:10.489 14:30:19 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:11:10.489 14:30:19 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:11:10.489 14:30:19 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:11:10.489 14:30:19 -- setup/hugepages.sh@207 -- # get_nodes 00:11:10.489 14:30:19 -- setup/hugepages.sh@27 -- # local node 00:11:10.489 14:30:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:10.489 14:30:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:11:10.489 14:30:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:10.489 14:30:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:10.489 14:30:19 -- setup/hugepages.sh@208 -- # clear_hp 00:11:10.489 14:30:19 -- setup/hugepages.sh@37 -- # local node hp 00:11:10.489 14:30:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:10.489 14:30:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:10.489 14:30:19 -- setup/hugepages.sh@41 -- # echo 0 00:11:10.489 14:30:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:10.489 14:30:19 -- setup/hugepages.sh@41 -- # echo 0 00:11:10.489 14:30:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:10.489 14:30:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:10.489 14:30:19 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:11:10.489 14:30:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:10.489 14:30:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.489 14:30:19 -- common/autotest_common.sh@10 -- # set +x 00:11:10.748 ************************************ 00:11:10.748 START TEST default_setup 00:11:10.748 ************************************ 00:11:10.748 14:30:19 -- common/autotest_common.sh@1111 -- # default_setup 00:11:10.748 14:30:19 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:11:10.748 14:30:19 -- setup/hugepages.sh@49 -- # local size=2097152 00:11:10.748 14:30:19 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:10.748 14:30:19 -- setup/hugepages.sh@51 -- # shift 00:11:10.748 14:30:19 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:10.748 14:30:19 -- setup/hugepages.sh@52 -- # local node_ids 00:11:10.748 14:30:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:10.748 14:30:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:10.748 14:30:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:10.748 14:30:19 -- setup/hugepages.sh@62 -- # 
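
The trace above shows setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches Hugepagesize (2048 kB), after which setup/hugepages.sh zeroes any per-node pools (clear_hp) and derives the test's page count from the requested 2097152 kB. A minimal sketch of that flow, with the names taken from the trace but the mapfile/per-node handling simplified away:

  #!/usr/bin/env bash
  # Simplified version of the meminfo lookup traced above (setup/common.sh get_meminfo).
  get_meminfo() {
      local get=$1 var val _
      # Walk /proc/meminfo until the requested field appears, then print its value.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  default_hugepages=$(get_meminfo Hugepagesize)    # 2048 (kB) in this run
  size=2097152                                     # kB requested by get_test_nr_hugepages
  nr_hugepages=$(( size / default_hugepages ))     # 2097152 / 2048 = 1024 pages
  echo "nr_hugepages=$nr_hugepages"
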
user_nodes=('0') 00:11:10.748 14:30:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:11:10.748 14:30:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:10.748 14:30:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:10.748 14:30:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:10.748 14:30:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:10.748 14:30:19 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:10.748 14:30:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:10.748 14:30:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:10.748 14:30:19 -- setup/hugepages.sh@73 -- # return 0 00:11:10.748 14:30:19 -- setup/hugepages.sh@137 -- # setup output 00:11:10.748 14:30:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:10.748 14:30:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:11.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.575 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.575 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.575 14:30:20 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:11:11.575 14:30:20 -- setup/hugepages.sh@89 -- # local node 00:11:11.575 14:30:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:11:11.575 14:30:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:11:11.575 14:30:20 -- setup/hugepages.sh@92 -- # local surp 00:11:11.575 14:30:20 -- setup/hugepages.sh@93 -- # local resv 00:11:11.575 14:30:20 -- setup/hugepages.sh@94 -- # local anon 00:11:11.575 14:30:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:11.575 14:30:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:11.575 14:30:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:11.575 14:30:20 -- setup/common.sh@18 -- # local node= 00:11:11.575 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:11.575 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:11.575 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:11.575 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:11.575 14:30:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:11.575 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:11.575 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:11.575 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.575 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7691568 kB' 'MemAvailable: 9491964 kB' 'Buffers: 2436 kB' 'Cached: 2013216 kB' 'SwapCached: 0 kB' 'Active: 854372 kB' 'Inactive: 1283636 kB' 'Active(anon): 132832 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 124220 kB' 'Mapped: 51692 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140868 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76552 kB' 'KernelStack: 6544 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- 
setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.576 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.576 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.577 14:30:20 -- setup/common.sh@33 -- # echo 0 00:11:11.577 14:30:20 -- setup/common.sh@33 -- # return 0 00:11:11.577 14:30:20 -- setup/hugepages.sh@97 -- # anon=0 00:11:11.577 14:30:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:11.577 14:30:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:11.577 14:30:20 -- setup/common.sh@18 -- # local node= 00:11:11.577 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:11.577 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:11.577 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:11.577 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:11.577 14:30:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:11.577 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:11.577 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7691320 kB' 'MemAvailable: 9491716 kB' 'Buffers: 2436 kB' 'Cached: 2013216 kB' 'SwapCached: 0 kB' 'Active: 854176 kB' 'Inactive: 1283636 kB' 'Active(anon): 132636 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 124060 kB' 'Mapped: 51692 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140868 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76552 kB' 'KernelStack: 6544 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': 
' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 
-- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.577 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.577 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.578 14:30:20 -- setup/common.sh@33 -- # echo 0 00:11:11.578 
14:30:20 -- setup/common.sh@33 -- # return 0 00:11:11.578 14:30:20 -- setup/hugepages.sh@99 -- # surp=0 00:11:11.578 14:30:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:11.578 14:30:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:11.578 14:30:20 -- setup/common.sh@18 -- # local node= 00:11:11.578 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:11.578 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:11.578 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:11.578 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:11.578 14:30:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:11.578 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:11.578 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7691672 kB' 'MemAvailable: 9492076 kB' 'Buffers: 2436 kB' 'Cached: 2013216 kB' 'SwapCached: 0 kB' 'Active: 854008 kB' 'Inactive: 1283644 kB' 'Active(anon): 132468 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123632 kB' 'Mapped: 51568 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140864 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76548 kB' 'KernelStack: 6560 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.578 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.578 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # 
read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 
-- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.579 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.579 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.579 14:30:20 -- setup/common.sh@33 -- # echo 0 00:11:11.579 14:30:20 -- setup/common.sh@33 -- # return 0 00:11:11.579 14:30:20 -- setup/hugepages.sh@100 -- # resv=0 00:11:11.579 nr_hugepages=1024 00:11:11.579 14:30:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:11.579 resv_hugepages=0 00:11:11.579 14:30:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:11.579 surplus_hugepages=0 00:11:11.579 14:30:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:11.579 anon_hugepages=0 00:11:11.579 14:30:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:11.579 14:30:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:11.579 14:30:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:11.579 14:30:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:11.580 14:30:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:11.580 14:30:20 -- setup/common.sh@18 -- # local node= 00:11:11.580 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:11.580 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:11.580 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:11.580 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:11.580 14:30:20 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:11:11.580 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:11.580 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7691932 kB' 'MemAvailable: 9492336 kB' 'Buffers: 2436 kB' 'Cached: 2013216 kB' 'SwapCached: 0 kB' 'Active: 854268 kB' 'Inactive: 1283644 kB' 'Active(anon): 132728 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123892 kB' 'Mapped: 51568 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140864 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76548 kB' 'KernelStack: 6560 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r 
var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # 
continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.580 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.580 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # 
continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:11.581 14:30:20 -- setup/common.sh@33 -- # echo 1024 00:11:11.581 14:30:20 -- setup/common.sh@33 -- # return 0 00:11:11.581 14:30:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:11.581 14:30:20 -- setup/hugepages.sh@112 -- # get_nodes 00:11:11.581 14:30:20 -- setup/hugepages.sh@27 -- # local node 00:11:11.581 14:30:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:11.581 14:30:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:11.581 14:30:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:11.581 14:30:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:11.581 14:30:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:11.581 14:30:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:11.581 14:30:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:11.581 14:30:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:11.581 14:30:20 -- setup/common.sh@18 -- # local node=0 00:11:11.581 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:11.581 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:11.581 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:11.581 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:11.581 14:30:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:11.581 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:11.581 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7692140 kB' 'MemUsed: 4549836 kB' 'SwapCached: 0 kB' 'Active: 854308 kB' 'Inactive: 1283644 kB' 'Active(anon): 132768 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2015652 kB' 'Mapped: 51568 kB' 'AnonPages: 123904 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64316 kB' 'Slab: 140864 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.581 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.581 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.582 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.582 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # continue 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:11.841 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:11.841 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.841 14:30:20 -- setup/common.sh@33 -- # echo 0 00:11:11.841 14:30:20 -- setup/common.sh@33 -- # return 0 00:11:11.841 14:30:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:11.841 14:30:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:11.841 
14:30:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:11.841 14:30:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:11.841 node0=1024 expecting 1024 00:11:11.841 14:30:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:11.841 14:30:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:11.841 00:11:11.841 real 0m1.031s 00:11:11.841 user 0m0.476s 00:11:11.841 sys 0m0.497s 00:11:11.841 14:30:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:11.841 14:30:20 -- common/autotest_common.sh@10 -- # set +x 00:11:11.841 ************************************ 00:11:11.841 END TEST default_setup 00:11:11.841 ************************************ 00:11:11.841 14:30:20 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:11:11.841 14:30:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:11.841 14:30:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:11.841 14:30:20 -- common/autotest_common.sh@10 -- # set +x 00:11:11.841 ************************************ 00:11:11.841 START TEST per_node_1G_alloc 00:11:11.841 ************************************ 00:11:11.841 14:30:20 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:11:11.841 14:30:20 -- setup/hugepages.sh@143 -- # local IFS=, 00:11:11.841 14:30:20 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:11:11.841 14:30:20 -- setup/hugepages.sh@49 -- # local size=1048576 00:11:11.841 14:30:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:11.841 14:30:20 -- setup/hugepages.sh@51 -- # shift 00:11:11.841 14:30:20 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:11.841 14:30:20 -- setup/hugepages.sh@52 -- # local node_ids 00:11:11.841 14:30:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:11.841 14:30:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:11.841 14:30:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:11.841 14:30:20 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:11.841 14:30:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:11:11.841 14:30:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:11.841 14:30:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:11.841 14:30:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:11.841 14:30:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:11.841 14:30:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:11.841 14:30:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:11.841 14:30:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:11:11.841 14:30:20 -- setup/hugepages.sh@73 -- # return 0 00:11:11.841 14:30:20 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:11:11.841 14:30:20 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:11:11.841 14:30:20 -- setup/hugepages.sh@146 -- # setup output 00:11:11.841 14:30:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:11.841 14:30:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:12.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:12.101 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:12.101 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:12.101 14:30:20 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:11:12.101 14:30:20 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:11:12.101 14:30:20 -- setup/hugepages.sh@89 -- # local node 00:11:12.101 
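The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" records above come from setup/common.sh's get_meminfo helper, which walks every line of /proc/meminfo (or a node's meminfo file under sysfs) until it reaches the requested field and then echoes that field's value. Below is a minimal, self-contained sketch of that scan for anyone wanting to reproduce the numbers outside the harness; the function name and argument handling are illustrative, not the script's own.

#!/usr/bin/env bash
# Sketch of the field scan traced above: return one meminfo value,
# optionally for a single NUMA node.
get_meminfo_sketch() {
    local field=$1 node=$2
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <n> "; strip it, then split on ": ".
    while IFS=': ' read -r var val _; do
        if [[ $var == "$field" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}

# Example calls matching values printed in the trace:
#   get_meminfo_sketch HugePages_Total     -> 1024 during the default_setup run above
#   get_meminfo_sketch HugePages_Surp 0    -> 0 (no surplus huge pages on node 0)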
14:30:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:11:12.101 14:30:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:11:12.101 14:30:20 -- setup/hugepages.sh@92 -- # local surp 00:11:12.101 14:30:20 -- setup/hugepages.sh@93 -- # local resv 00:11:12.101 14:30:20 -- setup/hugepages.sh@94 -- # local anon 00:11:12.101 14:30:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:12.101 14:30:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:12.101 14:30:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:12.102 14:30:20 -- setup/common.sh@18 -- # local node= 00:11:12.102 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:12.102 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.102 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.102 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:12.102 14:30:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:12.102 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.102 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8740496 kB' 'MemAvailable: 10540904 kB' 'Buffers: 2436 kB' 'Cached: 2013216 kB' 'SwapCached: 0 kB' 'Active: 854528 kB' 'Inactive: 1283648 kB' 'Active(anon): 132988 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 123920 kB' 'Mapped: 51700 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140872 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76556 kB' 'KernelStack: 6548 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 
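One record in the scan that begins above, "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]", is the guard deciding whether anonymous transparent huge pages are in play: the tested string matches the format of the kernel's transparent_hugepage "enabled" knob, and the pattern only rejects the case where "[never]" is the selected mode. A small sketch of the same guard follows; the sysfs path is the conventional location and is an assumption here, since the trace shows only the test itself.

# Assumed path: the standard THP control file; the trace shows only its contents.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *"[never]"* ]]; then
    echo "THP enabled ($thp): AnonHugePages in /proc/meminfo is meaningful"
else
    echo "THP set to never: skip the AnonHugePages comparison"
fi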
00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 
14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.102 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.102 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.103 14:30:20 -- setup/common.sh@33 -- # echo 0 00:11:12.103 14:30:20 -- setup/common.sh@33 -- # return 0 00:11:12.103 14:30:20 -- setup/hugepages.sh@97 -- # anon=0 00:11:12.103 14:30:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:12.103 14:30:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:12.103 14:30:20 -- setup/common.sh@18 -- # local 
node= 00:11:12.103 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:12.103 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.103 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.103 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:12.103 14:30:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:12.103 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.103 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8740496 kB' 'MemAvailable: 10540904 kB' 'Buffers: 2436 kB' 'Cached: 2013216 kB' 'SwapCached: 0 kB' 'Active: 854212 kB' 'Inactive: 1283648 kB' 'Active(anon): 132672 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 123744 kB' 'Mapped: 51584 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140868 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76552 kB' 'KernelStack: 6544 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 
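The HugePages_Surp scan under way here, and the HugePages_Rsvd lookup that follows it, feed the same accounting that closed the default_setup run above: the requested pool size is converted to a page count (1048576 kB / 2048 kB per page = 512, hence NRHUGE=512), and the kernel's HugePages_Total must equal that count plus any surplus and reserved pages before the per-node split is checked. A self-contained back-of-the-envelope version of that check, reading /proc/meminfo with awk instead of the harness helpers (variable names are illustrative):

# 1 GiB requested for the per_node_1G_alloc test, split into 2 MiB huge pages.
size_kb=1048576
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 in this trace
expected=$(( size_kb / hugepagesize_kb ))                            # 512

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Same shape of condition the trace evaluates as "(( 1024 == nr_hugepages + surp + resv ))"
# for default_setup; for this test the expectation becomes 512 pages, all on node 0.
if (( total == expected + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "mismatch: kernel reports $total, expected $(( expected + surp + resv ))"
fi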
00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- 
setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.103 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.103 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.104 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.104 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.104 14:30:20 -- setup/common.sh@33 -- # echo 0 00:11:12.104 14:30:20 -- setup/common.sh@33 -- # return 0 00:11:12.365 14:30:20 -- setup/hugepages.sh@99 -- # surp=0 00:11:12.365 14:30:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:12.365 14:30:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:12.365 14:30:20 -- setup/common.sh@18 -- # local node= 00:11:12.365 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:12.365 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.365 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.365 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:12.365 14:30:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:12.365 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.365 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.365 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.365 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8740496 kB' 'MemAvailable: 10540904 kB' 'Buffers: 2436 kB' 'Cached: 2013216 kB' 'SwapCached: 0 kB' 'Active: 853900 kB' 'Inactive: 1283648 kB' 'Active(anon): 132360 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283648 kB' 'Unevictable: 1536 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 123464 kB' 'Mapped: 51584 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140868 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76552 kB' 'KernelStack: 6544 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # 
read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 
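The field-by-field scan traced above is setup/common.sh's get_meminfo: it reads /proc/meminfo (or a per-node meminfo under /sys/devices/system/node) with IFS=': ', skips every key that is not the one requested (hence the long run of "continue" entries), and echoes the matching value. A minimal stand-alone sketch of that lookup, assuming a plain helper of the same name rather than the real SPDK script:

    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # per-node meminfo prefixes each line with "Node <n> ", so strip that first
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        echo 0
    }

For example, get_meminfo HugePages_Rsvd prints 0 on this host, which is the same value the traced call hands back through "echo 0" at setup/common.sh@33.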
00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.366 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.366 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.367 14:30:20 -- setup/common.sh@33 -- # echo 0 00:11:12.367 14:30:20 -- setup/common.sh@33 -- # return 0 00:11:12.367 14:30:20 -- setup/hugepages.sh@100 -- # resv=0 00:11:12.367 14:30:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:12.367 nr_hugepages=512 00:11:12.367 resv_hugepages=0 00:11:12.367 14:30:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:12.367 surplus_hugepages=0 00:11:12.367 14:30:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:12.367 anon_hugepages=0 00:11:12.367 14:30:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:12.367 14:30:20 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:12.367 14:30:20 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:12.367 14:30:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:12.367 14:30:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:12.367 14:30:20 -- setup/common.sh@18 -- # local node= 00:11:12.367 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:12.367 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.367 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.367 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:12.367 14:30:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:12.367 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.367 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8740760 kB' 'MemAvailable: 10541168 kB' 'Buffers: 2436 kB' 'Cached: 2013216 kB' 'SwapCached: 0 kB' 'Active: 854160 kB' 'Inactive: 1283648 kB' 'Active(anon): 132620 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 123724 kB' 'Mapped: 51584 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140868 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76552 kB' 'KernelStack: 6544 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- 
setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.367 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.367 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 
14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.368 14:30:20 -- setup/common.sh@33 -- # echo 512 00:11:12.368 14:30:20 -- setup/common.sh@33 -- # return 0 00:11:12.368 14:30:20 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv 
)) 00:11:12.368 14:30:20 -- setup/hugepages.sh@112 -- # get_nodes 00:11:12.368 14:30:20 -- setup/hugepages.sh@27 -- # local node 00:11:12.368 14:30:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:12.368 14:30:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:12.368 14:30:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:12.368 14:30:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:12.368 14:30:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:12.368 14:30:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:12.368 14:30:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:12.368 14:30:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:12.368 14:30:20 -- setup/common.sh@18 -- # local node=0 00:11:12.368 14:30:20 -- setup/common.sh@19 -- # local var val 00:11:12.368 14:30:20 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.368 14:30:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.368 14:30:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:12.368 14:30:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:12.368 14:30:20 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.368 14:30:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8741540 kB' 'MemUsed: 3500436 kB' 'SwapCached: 0 kB' 'Active: 854140 kB' 'Inactive: 1283648 kB' 'Active(anon): 132600 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'FilePages: 2015652 kB' 'Mapped: 51584 kB' 'AnonPages: 123712 kB' 'Shmem: 10464 kB' 'KernelStack: 6612 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64316 kB' 'Slab: 140868 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.368 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.368 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- 
setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # continue 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.369 14:30:20 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.369 14:30:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.369 14:30:20 -- setup/common.sh@33 -- # echo 0 00:11:12.369 14:30:20 -- setup/common.sh@33 -- # return 0 00:11:12.369 14:30:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:12.369 14:30:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:12.369 14:30:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:12.369 14:30:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:12.369 node0=512 expecting 512 00:11:12.369 14:30:20 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:12.369 14:30:20 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:12.369 00:11:12.369 real 0m0.502s 00:11:12.369 user 0m0.261s 00:11:12.369 sys 0m0.275s 00:11:12.369 14:30:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:12.369 14:30:20 -- common/autotest_common.sh@10 -- # set +x 00:11:12.369 ************************************ 00:11:12.369 END TEST per_node_1G_alloc 00:11:12.369 ************************************ 00:11:12.369 14:30:20 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:11:12.369 14:30:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.369 14:30:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.369 14:30:20 -- common/autotest_common.sh@10 -- # set +x 00:11:12.369 ************************************ 00:11:12.369 START TEST even_2G_alloc 00:11:12.369 ************************************ 00:11:12.369 14:30:20 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:11:12.369 14:30:20 -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:11:12.369 14:30:20 -- setup/hugepages.sh@49 -- # local size=2097152 00:11:12.369 14:30:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:12.369 14:30:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:12.369 14:30:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:12.369 14:30:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:12.369 14:30:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:12.369 14:30:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:11:12.369 14:30:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:12.370 14:30:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:12.370 14:30:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:12.370 14:30:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:12.370 14:30:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:12.370 14:30:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:12.370 14:30:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:12.370 14:30:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:11:12.370 14:30:20 -- setup/hugepages.sh@83 -- # : 0 00:11:12.370 14:30:20 -- setup/hugepages.sh@84 -- # : 0 00:11:12.370 14:30:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:12.370 14:30:20 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:11:12.370 14:30:20 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:11:12.370 14:30:20 -- setup/hugepages.sh@153 -- # setup output 00:11:12.370 14:30:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:12.370 14:30:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:12.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:12.628 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:12.628 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:12.628 14:30:21 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:11:12.890 14:30:21 -- setup/hugepages.sh@89 -- # local node 00:11:12.890 14:30:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:11:12.890 14:30:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:11:12.890 14:30:21 -- setup/hugepages.sh@92 -- # local surp 00:11:12.890 14:30:21 -- setup/hugepages.sh@93 -- # local resv 00:11:12.890 14:30:21 -- setup/hugepages.sh@94 -- # local anon 00:11:12.890 14:30:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:12.890 14:30:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:12.890 14:30:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:12.890 14:30:21 -- setup/common.sh@18 -- # local node= 00:11:12.890 14:30:21 -- setup/common.sh@19 -- # local var val 00:11:12.890 14:30:21 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.890 14:30:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.890 14:30:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:12.890 14:30:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:12.890 14:30:21 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.890 14:30:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7689972 kB' 'MemAvailable: 9490384 kB' 'Buffers: 2436 kB' 'Cached: 2013220 kB' 'SwapCached: 0 kB' 'Active: 854388 kB' 
'Inactive: 1283652 kB' 'Active(anon): 132848 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 124216 kB' 'Mapped: 52016 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140896 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76580 kB' 'KernelStack: 6548 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.890 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.890 14:30:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 
-- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:12.891 14:30:21 -- setup/common.sh@33 -- # echo 0 00:11:12.891 14:30:21 -- setup/common.sh@33 -- # return 0 00:11:12.891 14:30:21 -- setup/hugepages.sh@97 -- # anon=0 00:11:12.891 14:30:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:12.891 14:30:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:12.891 14:30:21 -- setup/common.sh@18 -- # local node= 00:11:12.891 14:30:21 -- setup/common.sh@19 -- # local var val 00:11:12.891 14:30:21 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.891 14:30:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.891 14:30:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:12.891 14:30:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:12.891 14:30:21 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.891 14:30:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7689720 kB' 'MemAvailable: 9490132 kB' 'Buffers: 2436 kB' 'Cached: 2013220 kB' 'SwapCached: 0 kB' 'Active: 854200 kB' 'Inactive: 1283652 kB' 'Active(anon): 132660 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 123768 kB' 'Mapped: 51596 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140904 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76588 kB' 'KernelStack: 6544 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357236 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.891 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.891 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 
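The long runs of "continue" above and below are the xtrace of setup/common.sh's get_meminfo helper: it captures the meminfo file into an array, then walks it with IFS=': ', comparing each key against the requested field (the backslash-escaped strings such as \A\n\o\n\H\u\g\e\P\a\g\e\s are simply how bash xtrace prints a literal, non-glob comparison string). A minimal sketch of that logic, paraphrased from the trace rather than copied from setup/common.sh, with a hypothetical name:

get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    # With a node argument, read the per-node counters instead (see the @23/@24 checks in the trace).
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix each line with "Node N "; strip it so the keys match.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. HugePages_Surp -> 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

Every key that does not match falls through to "continue", which is why a single get_meminfo call accounts for dozens of trace entries here.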
00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.892 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.892 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 
-- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.893 14:30:21 -- setup/common.sh@33 -- # echo 0 00:11:12.893 14:30:21 -- setup/common.sh@33 -- # return 0 00:11:12.893 14:30:21 -- setup/hugepages.sh@99 -- # surp=0 00:11:12.893 14:30:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:12.893 14:30:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:12.893 14:30:21 -- setup/common.sh@18 -- # local node= 00:11:12.893 14:30:21 -- setup/common.sh@19 -- # local var val 00:11:12.893 14:30:21 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.893 14:30:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.893 14:30:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:12.893 14:30:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:12.893 14:30:21 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.893 14:30:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.893 14:30:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7689720 kB' 'MemAvailable: 9490132 kB' 'Buffers: 2436 kB' 'Cached: 2013220 kB' 'SwapCached: 0 kB' 'Active: 854568 kB' 'Inactive: 1283652 kB' 'Active(anon): 133028 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 124272 kB' 'Mapped: 51596 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140904 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76588 kB' 'KernelStack: 6608 kB' 'PageTables: 4640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 
-- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.893 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.893 14:30:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 
14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.894 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.894 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:12.894 14:30:21 -- setup/common.sh@33 -- # echo 0 00:11:12.894 14:30:21 -- setup/common.sh@33 -- # return 0 00:11:12.894 14:30:21 -- setup/hugepages.sh@100 -- # resv=0 00:11:12.894 14:30:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:12.894 nr_hugepages=1024 00:11:12.894 14:30:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:12.894 resv_hugepages=0 00:11:12.894 14:30:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:12.894 surplus_hugepages=0 00:11:12.894 14:30:21 -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:11:12.894 anon_hugepages=0 00:11:12.894 14:30:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:12.894 14:30:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:12.894 14:30:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:12.894 14:30:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:12.894 14:30:21 -- setup/common.sh@18 -- # local node= 00:11:12.894 14:30:21 -- setup/common.sh@19 -- # local var val 00:11:12.894 14:30:21 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.895 14:30:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.895 14:30:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:12.895 14:30:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:12.895 14:30:21 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.895 14:30:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7689972 kB' 'MemAvailable: 9490384 kB' 'Buffers: 2436 kB' 'Cached: 2013220 kB' 'SwapCached: 0 kB' 'Active: 854200 kB' 'Inactive: 1283652 kB' 'Active(anon): 132660 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 123904 kB' 'Mapped: 51656 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140900 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76584 kB' 'KernelStack: 6528 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # 
[[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 
00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.895 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.895 14:30:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:12.896 14:30:21 -- setup/common.sh@33 -- # echo 1024 00:11:12.896 14:30:21 -- setup/common.sh@33 -- # return 0 00:11:12.896 14:30:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:12.896 14:30:21 -- setup/hugepages.sh@112 -- # get_nodes 00:11:12.896 14:30:21 -- setup/hugepages.sh@27 -- # local node 00:11:12.896 14:30:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:12.896 14:30:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:12.896 14:30:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:12.896 14:30:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:12.896 14:30:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:12.896 14:30:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:12.896 14:30:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:12.896 14:30:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:12.896 14:30:21 -- setup/common.sh@18 -- # local node=0 00:11:12.896 14:30:21 -- setup/common.sh@19 -- # local var val 00:11:12.896 14:30:21 -- setup/common.sh@20 -- # local mem_f mem 00:11:12.896 14:30:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.896 14:30:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:12.896 14:30:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:12.896 14:30:21 -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.896 14:30:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.896 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 
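Having read AnonHugePages, HugePages_Surp and HugePages_Rsvd back as 0, the test pulls HugePages_Total (1024) and, at hugepages.sh@110, verifies that the kernel's total equals the requested nr_hugepages plus surplus and reserved pages. The 1024 pages also agree with the Hugetlb: 2097152 kB figure in the snapshots above (1024 pages x 2048 kB). A hedged sketch of that accounting, reusing the helper sketched earlier (variable names assumed):

nr_hugepages=1024                                  # what the test configured
anon=$(get_meminfo_sketch AnonHugePages)           # 0 kB in this run
surp=$(get_meminfo_sketch HugePages_Surp)          # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0
total=$(get_meminfo_sketch HugePages_Total)        # 1024
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'

After the system-wide check, get_nodes (@112) collects the NUMA nodes and the same reads are repeated per node, which is where the node0 snapshot below comes from.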
00:11:12.896 14:30:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7689972 kB' 'MemUsed: 4552004 kB' 'SwapCached: 0 kB' 'Active: 854216 kB' 'Inactive: 1283652 kB' 'Active(anon): 132676 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'FilePages: 2015656 kB' 'Mapped: 51596 kB' 'AnonPages: 123804 kB' 'Shmem: 10464 kB' 'KernelStack: 6528 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64316 kB' 'Slab: 140900 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.896 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 
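That last snapshot comes from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, which is why it reports MemUsed and FilePages instead of MemAvailable, Buffers and Cached. Per-node meminfo lines carry a "Node 0 " prefix, and the @28/@29 steps in the trace slurp the file into an array and strip that prefix before the key scan, roughly:

shopt -s extglob                                    # "+([0-9])" is an extglob pattern
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")                    # "Node 0 HugePages_Total: 1024" -> "HugePages_Total: 1024"
printf '%s\n' "${mem[@]}" | grep HugePages_Total    # shows HugePages_Total: 1024 on this single-node VM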
00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 
14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # continue 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:12.897 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:12.897 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.897 14:30:21 -- setup/common.sh@33 -- # echo 0 00:11:12.897 14:30:21 -- setup/common.sh@33 -- # return 0 00:11:12.897 14:30:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:12.897 14:30:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:12.897 14:30:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:12.897 14:30:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:12.897 14:30:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:12.897 node0=1024 expecting 1024 00:11:12.897 14:30:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:12.897 00:11:12.897 real 0m0.487s 00:11:12.897 user 0m0.244s 00:11:12.897 sys 0m0.259s 00:11:12.897 14:30:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:12.898 14:30:21 -- common/autotest_common.sh@10 -- # set +x 00:11:12.898 ************************************ 00:11:12.898 END TEST even_2G_alloc 00:11:12.898 ************************************ 00:11:12.898 14:30:21 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:11:12.898 14:30:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.898 14:30:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.898 14:30:21 -- common/autotest_common.sh@10 -- # set +x 00:11:13.156 ************************************ 00:11:13.156 START TEST odd_alloc 00:11:13.156 ************************************ 00:11:13.156 14:30:21 -- common/autotest_common.sh@1111 -- # odd_alloc 00:11:13.156 14:30:21 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:11:13.156 14:30:21 -- setup/hugepages.sh@49 -- # local size=2098176 00:11:13.156 14:30:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:13.156 14:30:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:13.156 14:30:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:11:13.156 14:30:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:13.156 14:30:21 -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:13.156 14:30:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:11:13.156 14:30:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:11:13.156 14:30:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:13.156 14:30:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:13.156 14:30:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:13.156 14:30:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:13.156 14:30:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:13.156 14:30:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:13.156 14:30:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:11:13.156 14:30:21 -- setup/hugepages.sh@83 -- # : 0 00:11:13.156 14:30:21 -- setup/hugepages.sh@84 -- # : 0 00:11:13.156 14:30:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:13.156 14:30:21 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:11:13.156 14:30:21 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:11:13.156 14:30:21 -- setup/hugepages.sh@160 -- # setup output 00:11:13.156 14:30:21 -- setup/common.sh@9 -- 
# [[ output == output ]] 00:11:13.156 14:30:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:13.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:13.417 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:13.417 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:13.417 14:30:21 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:11:13.417 14:30:21 -- setup/hugepages.sh@89 -- # local node 00:11:13.417 14:30:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:11:13.417 14:30:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:11:13.417 14:30:21 -- setup/hugepages.sh@92 -- # local surp 00:11:13.417 14:30:21 -- setup/hugepages.sh@93 -- # local resv 00:11:13.417 14:30:21 -- setup/hugepages.sh@94 -- # local anon 00:11:13.417 14:30:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:13.417 14:30:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:13.417 14:30:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:13.417 14:30:21 -- setup/common.sh@18 -- # local node= 00:11:13.417 14:30:21 -- setup/common.sh@19 -- # local var val 00:11:13.417 14:30:21 -- setup/common.sh@20 -- # local mem_f mem 00:11:13.417 14:30:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.417 14:30:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:13.417 14:30:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:13.417 14:30:21 -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.417 14:30:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.417 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7686728 kB' 'MemAvailable: 9487176 kB' 'Buffers: 2436 kB' 'Cached: 2013256 kB' 'SwapCached: 0 kB' 'Active: 854640 kB' 'Inactive: 1283688 kB' 'Active(anon): 133100 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'AnonPages: 124232 kB' 'Mapped: 51800 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140972 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76656 kB' 'KernelStack: 6580 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 
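(Annotation, not part of the captured log.) The odd_alloc test above passes 2098176 kB to get_test_nr_hugepages (that is HUGEMEM=2049, i.e. 2049 x 1024 kB) and ends up with nr_hugepages=1025, which is 1025 pages of 2048 kB = 2099200 kB, the Hugetlb figure visible in the meminfo dumps; with a single NUMA node the whole 1025 lands in nodes_test[0]. The long runs of '[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' followed by 'continue' that dominate this part of the log are bash xtrace from the get_meminfo helper in setup/common.sh walking /proc/meminfo one field at a time until it hits the requested key. A minimal sketch of the loop being traced, with names taken from the xtrace (the real helper snapshots the file with mapfile and strips any 'Node N' prefix first, so it differs in detail):

# Illustrative reimplementation of the traced loop, not the actual setup/common.sh code.
get_meminfo() {
    local get=$1 node=${2:-}                 # e.g. get_meminfo AnonHugePages
    local mem_f=/proc/meminfo var val _
    # Per-node queries read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        # Every key that is not the one requested shows up in the xtrace as a 'continue' record.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    return 1                                 # key not present (not exercised in this log)
}

Called as anon=$(get_meminfo AnonHugePages), it prints 0 here, which is what the 'echo 0' / 'return 0' / 'anon=0' records in the trace below correspond to.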
00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 
14:30:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:11:13.418 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.418 14:30:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.418 14:30:21 -- setup/common.sh@33 -- # echo 0 00:11:13.418 14:30:21 -- setup/common.sh@33 -- # return 0 00:11:13.418 14:30:21 -- setup/hugepages.sh@97 -- # anon=0 00:11:13.418 14:30:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:13.418 14:30:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:13.418 14:30:21 -- setup/common.sh@18 -- # local node= 00:11:13.418 14:30:21 -- setup/common.sh@19 -- # local var val 00:11:13.418 14:30:21 -- setup/common.sh@20 -- # local mem_f mem 00:11:13.419 14:30:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.419 14:30:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:13.419 14:30:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:13.419 14:30:21 -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.419 14:30:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7688060 kB' 'MemAvailable: 9488508 kB' 'Buffers: 2436 kB' 'Cached: 2013256 kB' 'SwapCached: 0 kB' 'Active: 854080 kB' 'Inactive: 1283688 kB' 'Active(anon): 132540 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'AnonPages: 123680 kB' 'Mapped: 51616 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140968 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76652 kB' 'KernelStack: 6560 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 
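(Annotation, not part of the captured log.) At this point verify_nr_hugepages has recorded anon=0 (hugepages.sh@97) and the scan starting here repeats the same walk for HugePages_Surp; HugePages_Rsvd and HugePages_Total follow further down, ending in the '(( 1025 == nr_hugepages + surp + resv ))' style checks near the end of this excerpt. Condensed, the flow being traced looks roughly like the sketch below; it assumes only what is visible in the log (the expected count, 1025 in this run, comes from bookkeeping outside this excerpt, so it is shown as a hypothetical parameter, and nr_hugepages is the global set earlier by get_test_nr_hugepages):

# Sketch of the verification flow only, not the actual hugepages.sh source.
verify_nr_hugepages() {
    local expected=$1                      # 1025 in this run (hypothetical parameter)
    local anon surp resv total
    anon=$(get_meminfo AnonHugePages)      # anon=0 just above
    surp=$(get_meminfo HugePages_Surp)     # the scan starting below; surp=0
    resv=$(get_meminfo HugePages_Rsvd)     # resv=0 further down
    echo "nr_hugepages=$nr_hugepages"      # the plain output lines seen later in the log
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    (( expected == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0; failing this fails the test
    (( expected == nr_hugepages ))
    total=$(get_meminfo HugePages_Total)   # the scan that runs past the end of this excerpt
}

The per-node bookkeeping seen at the end of even_2G_alloc ('nodes_test', 'sorted_t', 'node0=1024 expecting 1024') is the same idea applied per NUMA node.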
00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- 
setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.419 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.419 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.420 14:30:21 -- setup/common.sh@33 -- # echo 0 00:11:13.420 14:30:21 -- setup/common.sh@33 -- # return 0 00:11:13.420 14:30:21 -- setup/hugepages.sh@99 -- # surp=0 00:11:13.420 14:30:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:13.420 14:30:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:13.420 14:30:21 -- setup/common.sh@18 -- # local node= 00:11:13.420 14:30:21 -- setup/common.sh@19 -- # local var val 00:11:13.420 14:30:21 -- setup/common.sh@20 -- # local mem_f mem 00:11:13.420 14:30:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.420 14:30:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:13.420 14:30:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:13.420 14:30:21 -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.420 14:30:21 
-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7688572 kB' 'MemAvailable: 9489020 kB' 'Buffers: 2436 kB' 'Cached: 2013256 kB' 'SwapCached: 0 kB' 'Active: 854292 kB' 'Inactive: 1283688 kB' 'Active(anon): 132752 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'AnonPages: 123700 kB' 'Mapped: 51616 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140968 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76652 kB' 'KernelStack: 6560 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 
00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.420 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.420 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.421 14:30:21 -- setup/common.sh@33 -- # echo 0 00:11:13.421 14:30:21 -- setup/common.sh@33 -- # return 0 00:11:13.421 nr_hugepages=1025 00:11:13.421 resv_hugepages=0 00:11:13.421 surplus_hugepages=0 00:11:13.421 anon_hugepages=0 00:11:13.421 14:30:21 -- setup/hugepages.sh@100 -- # resv=0 00:11:13.421 14:30:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:11:13.421 14:30:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:13.421 14:30:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:13.421 14:30:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:13.421 14:30:21 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:13.421 14:30:21 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:11:13.421 14:30:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:13.421 14:30:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:13.421 14:30:21 -- setup/common.sh@18 -- # local node= 00:11:13.421 14:30:21 -- setup/common.sh@19 -- # local var val 00:11:13.421 14:30:21 -- setup/common.sh@20 -- # local mem_f mem 00:11:13.421 14:30:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.421 14:30:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:13.421 14:30:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:13.421 14:30:21 -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.421 14:30:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7688828 kB' 'MemAvailable: 9489276 kB' 'Buffers: 2436 kB' 'Cached: 2013256 kB' 'SwapCached: 0 kB' 'Active: 854080 kB' 'Inactive: 1283688 kB' 'Active(anon): 132540 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'AnonPages: 123900 kB' 'Mapped: 51616 kB' 'Shmem: 
10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140968 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76652 kB' 'KernelStack: 6544 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 357404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.421 14:30:21 -- setup/common.sh@32 -- # continue 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.421 14:30:21 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 
14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.422 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.422 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 
00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.682 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.682 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.682 14:30:22 -- setup/common.sh@33 -- # echo 1025 00:11:13.682 14:30:22 -- setup/common.sh@33 -- # return 0 00:11:13.682 14:30:22 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:13.682 14:30:22 -- setup/hugepages.sh@112 -- # get_nodes 00:11:13.682 14:30:22 -- setup/hugepages.sh@27 -- # local node 00:11:13.682 14:30:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:13.682 14:30:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:11:13.682 14:30:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:13.682 14:30:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:13.682 14:30:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:13.682 14:30:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:13.683 14:30:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:13.683 14:30:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:13.683 14:30:22 -- setup/common.sh@18 -- # local node=0 00:11:13.683 14:30:22 -- setup/common.sh@19 -- # local var val 00:11:13.683 14:30:22 -- setup/common.sh@20 -- # local mem_f mem 00:11:13.683 14:30:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.683 14:30:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:13.683 14:30:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:13.683 14:30:22 -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.683 14:30:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7688828 kB' 'MemUsed: 4553148 kB' 'SwapCached: 0 kB' 'Active: 854412 kB' 'Inactive: 1283688 kB' 'Active(anon): 132872 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 980 kB' 'Writeback: 0 kB' 'FilePages: 2015692 kB' 'Mapped: 51616 kB' 'AnonPages: 123948 kB' 'Shmem: 10464 kB' 'KernelStack: 6528 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64316 kB' 'Slab: 140968 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
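The loop traced above is the setup/common.sh get_meminfo helper scanning a meminfo file one "Field: value" line at a time until it reaches the requested key, then echoing the value (1025 for HugePages_Total here, and the node0 file for the HugePages_Surp lookup). A minimal sketch of the same idea, with the function body and node handling inferred from the trace rather than copied from the SPDK script:

#!/usr/bin/env bash
shopt -s extglob

# get_meminfo FIELD [NODE]
# Print the value of FIELD from /proc/meminfo, or from the per-node meminfo
# file when NODE is given (per-node lines carry a "Node N " prefix).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix, if any
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Per the trace above, on this VM:
#   get_meminfo HugePages_Total     # -> 1025
#   get_meminfo HugePages_Surp 0    # -> 0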
00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 
00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.683 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.683 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.684 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.684 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.684 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.684 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.684 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.684 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.684 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.684 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.684 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.684 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.684 14:30:22 -- setup/common.sh@33 -- # echo 0 00:11:13.684 14:30:22 -- setup/common.sh@33 -- # return 0 00:11:13.684 node0=1025 expecting 1025 00:11:13.684 ************************************ 00:11:13.684 END TEST odd_alloc 00:11:13.684 ************************************ 00:11:13.684 14:30:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:13.684 14:30:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:13.684 14:30:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:13.684 14:30:22 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:11:13.684 14:30:22 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:11:13.684 00:11:13.684 real 0m0.569s 00:11:13.684 user 0m0.282s 00:11:13.684 sys 0m0.274s 00:11:13.684 14:30:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:13.684 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:11:13.684 14:30:22 -- setup/hugepages.sh@214 -- # run_test 
custom_alloc custom_alloc 00:11:13.684 14:30:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:13.684 14:30:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:13.684 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:11:13.684 ************************************ 00:11:13.684 START TEST custom_alloc 00:11:13.684 ************************************ 00:11:13.684 14:30:22 -- common/autotest_common.sh@1111 -- # custom_alloc 00:11:13.684 14:30:22 -- setup/hugepages.sh@167 -- # local IFS=, 00:11:13.684 14:30:22 -- setup/hugepages.sh@169 -- # local node 00:11:13.684 14:30:22 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:11:13.684 14:30:22 -- setup/hugepages.sh@170 -- # local nodes_hp 00:11:13.684 14:30:22 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:11:13.684 14:30:22 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:11:13.684 14:30:22 -- setup/hugepages.sh@49 -- # local size=1048576 00:11:13.684 14:30:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:13.684 14:30:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:13.684 14:30:22 -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:13.684 14:30:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:11:13.684 14:30:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:13.684 14:30:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:13.684 14:30:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:13.684 14:30:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:13.684 14:30:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:13.684 14:30:22 -- setup/hugepages.sh@83 -- # : 0 00:11:13.684 14:30:22 -- setup/hugepages.sh@84 -- # : 0 00:11:13.684 14:30:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:11:13.684 14:30:22 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:11:13.684 14:30:22 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:11:13.684 14:30:22 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:11:13.684 14:30:22 -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:13.684 14:30:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:11:13.684 14:30:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:13.684 14:30:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:13.684 14:30:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:13.684 14:30:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:13.684 14:30:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:11:13.684 14:30:22 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:13.684 14:30:22 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:11:13.684 14:30:22 -- setup/hugepages.sh@78 -- # return 0 00:11:13.684 14:30:22 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:11:13.684 14:30:22 -- setup/hugepages.sh@187 -- # setup 
output 00:11:13.684 14:30:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:13.684 14:30:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:13.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:13.943 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:13.943 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:13.943 14:30:22 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:11:13.943 14:30:22 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:11:13.943 14:30:22 -- setup/hugepages.sh@89 -- # local node 00:11:13.943 14:30:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:11:13.943 14:30:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:11:13.943 14:30:22 -- setup/hugepages.sh@92 -- # local surp 00:11:13.943 14:30:22 -- setup/hugepages.sh@93 -- # local resv 00:11:13.943 14:30:22 -- setup/hugepages.sh@94 -- # local anon 00:11:13.943 14:30:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:13.943 14:30:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:13.943 14:30:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:13.943 14:30:22 -- setup/common.sh@18 -- # local node= 00:11:13.943 14:30:22 -- setup/common.sh@19 -- # local var val 00:11:13.943 14:30:22 -- setup/common.sh@20 -- # local mem_f mem 00:11:13.943 14:30:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.943 14:30:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:13.943 14:30:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:13.943 14:30:22 -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.943 14:30:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.943 14:30:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8740180 kB' 'MemAvailable: 10540632 kB' 'Buffers: 2436 kB' 'Cached: 2013260 kB' 'SwapCached: 0 kB' 'Active: 854440 kB' 'Inactive: 1283692 kB' 'Active(anon): 132900 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283692 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'AnonPages: 124256 kB' 'Mapped: 51680 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140956 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76640 kB' 'KernelStack: 6548 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.943 14:30:22 -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # continue 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:13.943 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:13.943 14:30:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 
-- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.205 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.205 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.206 14:30:22 -- setup/common.sh@33 -- # echo 0 00:11:14.206 14:30:22 -- setup/common.sh@33 -- # return 0 00:11:14.206 14:30:22 -- setup/hugepages.sh@97 -- # anon=0 00:11:14.206 14:30:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:14.206 14:30:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:14.206 14:30:22 -- setup/common.sh@18 -- # local node= 00:11:14.206 14:30:22 -- setup/common.sh@19 -- # local var val 00:11:14.206 14:30:22 -- setup/common.sh@20 -- # local mem_f mem 00:11:14.206 14:30:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.206 14:30:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:14.206 14:30:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.206 14:30:22 -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.206 14:30:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8740180 kB' 'MemAvailable: 10540632 kB' 'Buffers: 2436 kB' 'Cached: 2013260 kB' 'SwapCached: 0 kB' 'Active: 854044 kB' 'Inactive: 1283692 kB' 'Active(anon): 132504 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283692 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'AnonPages: 123900 kB' 'Mapped: 51628 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140952 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76636 kB' 'KernelStack: 6544 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 
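The same field-by-field scan repeats below for HugePages_Rsvd; what verify_nr_hugepages ultimately computes from these lookups is a simple balance check between the kernel's reported hugepage counts and what the test requested (1025 pages for odd_alloc, 512 for custom_alloc). A rough sketch of that arithmetic, reusing the get_meminfo helper sketched earlier; verify_pages is an illustrative name and the per-node summing is simplified relative to the real hugepages.sh logic:

# verify_pages EXPECTED
# Check that HugePages_Total equals the requested count plus surplus and
# reserved pages, and that the per-node totals add up to the same figure.
verify_pages() {
    local expected=$1                 # e.g. 1025 for odd_alloc, 512 for custom_alloc
    local total surp resv
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    (( total == expected + surp + resv )) || return 1
    # Sum the per-node totals (only node0 exists on this single-node VM).
    local node sum=0
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node ]] || continue
        (( sum += $(get_meminfo HugePages_Total "${node##*node}") ))
    done
    (( sum == total ))
}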
00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.206 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.206 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.207 14:30:22 -- setup/common.sh@33 -- # echo 0 00:11:14.207 14:30:22 -- setup/common.sh@33 -- # return 0 00:11:14.207 14:30:22 -- setup/hugepages.sh@99 -- # surp=0 00:11:14.207 14:30:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:14.207 14:30:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:14.207 14:30:22 -- setup/common.sh@18 -- # local node= 00:11:14.207 14:30:22 -- setup/common.sh@19 -- # local var val 00:11:14.207 14:30:22 -- setup/common.sh@20 -- # local mem_f mem 00:11:14.207 14:30:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.207 14:30:22 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:11:14.207 14:30:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.207 14:30:22 -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.207 14:30:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8740180 kB' 'MemAvailable: 10540632 kB' 'Buffers: 2436 kB' 'Cached: 2013260 kB' 'SwapCached: 0 kB' 'Active: 854116 kB' 'Inactive: 1283692 kB' 'Active(anon): 132576 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283692 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'AnonPages: 123968 kB' 'Mapped: 51628 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140952 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76636 kB' 'KernelStack: 6560 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.207 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.207 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 
-- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- 
setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.208 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.208 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.208 14:30:22 -- setup/common.sh@33 -- # echo 0 00:11:14.208 14:30:22 -- setup/common.sh@33 -- # return 0 00:11:14.208 nr_hugepages=512 00:11:14.208 resv_hugepages=0 00:11:14.208 surplus_hugepages=0 00:11:14.208 anon_hugepages=0 00:11:14.208 14:30:22 -- setup/hugepages.sh@100 -- # resv=0 00:11:14.208 14:30:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:14.208 14:30:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:14.208 14:30:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:14.208 14:30:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:14.208 14:30:22 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:14.208 14:30:22 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:14.208 14:30:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:14.208 14:30:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:14.208 14:30:22 -- setup/common.sh@18 -- # local node= 00:11:14.209 14:30:22 -- setup/common.sh@19 -- # local var val 00:11:14.209 14:30:22 -- setup/common.sh@20 -- # local mem_f mem 00:11:14.209 14:30:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.209 14:30:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:14.209 14:30:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.209 14:30:22 -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.209 14:30:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8740180 kB' 'MemAvailable: 10540632 kB' 'Buffers: 2436 kB' 'Cached: 2013260 kB' 'SwapCached: 0 kB' 'Active: 854076 kB' 'Inactive: 1283692 kB' 'Active(anon): 132536 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283692 
kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'AnonPages: 123972 kB' 'Mapped: 51628 kB' 'Shmem: 10464 kB' 'KReclaimable: 64316 kB' 'Slab: 140952 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76636 kB' 'KernelStack: 6560 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 357572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 
-- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # 
continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.209 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.209 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 
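The trace above repeatedly splits each /proc/meminfo line on ': ' and skips every key that is not the one being queried, then checks that the hugepage pool it configured is fully present. A minimal standalone sketch of that parsing pattern follows; the helper name meminfo_get, the variable names, and the 512-page expectation are illustrative assumptions, not the SPDK setup/common.sh helper itself.

  # Return the value column of one /proc/meminfo field (hypothetical helper).
  meminfo_get() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          # var is the field name, val the number, _ swallows the "kB" unit.
          if [[ $var == "$want" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  # Example check in the spirit of the trace: the configured pool (512 pages
  # here) should equal the total minus any surplus/reserved pages.
  total=$(meminfo_get HugePages_Total)
  surp=$(meminfo_get HugePages_Surp)
  rsvd=$(meminfo_get HugePages_Rsvd)
  (( total == 512 + surp + rsvd )) && echo "hugepage pool OK: ${total} pages"

The same loop structure is what produces the long field-by-field "continue" entries in the log: every key is read and compared, and only the requested one is echoed back.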
00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.210 14:30:22 -- setup/common.sh@33 -- # echo 512 00:11:14.210 14:30:22 -- setup/common.sh@33 -- # return 0 00:11:14.210 14:30:22 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:14.210 14:30:22 -- setup/hugepages.sh@112 -- # get_nodes 00:11:14.210 14:30:22 -- setup/hugepages.sh@27 -- # local node 00:11:14.210 14:30:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:14.210 14:30:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:14.210 14:30:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:14.210 14:30:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:14.210 14:30:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:14.210 14:30:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:14.210 14:30:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:14.210 14:30:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:14.210 14:30:22 -- setup/common.sh@18 -- # local node=0 00:11:14.210 14:30:22 -- setup/common.sh@19 -- # local var val 00:11:14.210 14:30:22 -- setup/common.sh@20 -- # local mem_f mem 00:11:14.210 14:30:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.210 14:30:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:14.210 14:30:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:14.210 14:30:22 -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.210 14:30:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8740432 kB' 'MemUsed: 3501544 kB' 'SwapCached: 0 kB' 'Active: 854080 kB' 'Inactive: 1283692 kB' 'Active(anon): 132540 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283692 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1128 kB' 'Writeback: 0 kB' 'FilePages: 2015696 kB' 'Mapped: 51628 kB' 'AnonPages: 123928 kB' 'Shmem: 10464 kB' 'KernelStack: 6544 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64316 kB' 'Slab: 140952 kB' 'SReclaimable: 64316 kB' 'SUnreclaim: 76636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 
-- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.210 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.210 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 
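At this point the trace switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node <id> " prefix that the script strips with an extglob pattern before doing the same key/value split per NUMA node. A hedged per-node sketch under those assumptions (node_hugepages and expected are illustrative names, not from the scripts):

  shopt -s extglob
  # Report HugePages_Total for one NUMA node (hypothetical helper).
  node_hugepages() {
      local node=$1 line var val _
      while read -r line; do
          line=${line#Node +([0-9]) }            # drop the "Node 0 " prefix
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == HugePages_Total ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }

  # Example: confirm each node holds the expected share of the pool.
  expected=512
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      printf 'node%s: %s hugepages (expected %s)\n' \
          "$node" "$(node_hugepages "$node")" "$expected"
  done

On this single-node VM the check reduces to node0 holding all 512 pages, which is what the "node0=512 expecting 512" line further down records.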
00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # continue 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.211 14:30:22 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.211 14:30:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.211 14:30:22 -- setup/common.sh@33 -- # echo 0 00:11:14.211 14:30:22 -- setup/common.sh@33 -- # return 0 00:11:14.211 14:30:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:14.211 14:30:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:14.211 node0=512 expecting 512 00:11:14.211 ************************************ 00:11:14.211 END TEST custom_alloc 00:11:14.211 ************************************ 00:11:14.211 14:30:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:14.211 14:30:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:14.211 14:30:22 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:14.211 14:30:22 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:14.211 00:11:14.211 real 0m0.538s 00:11:14.211 user 0m0.291s 00:11:14.211 sys 0m0.249s 00:11:14.211 14:30:22 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:11:14.211 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:11:14.211 14:30:22 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:11:14.211 14:30:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:14.211 14:30:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.211 14:30:22 -- common/autotest_common.sh@10 -- # set +x 00:11:14.470 ************************************ 00:11:14.470 START TEST no_shrink_alloc 00:11:14.470 ************************************ 00:11:14.470 14:30:22 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:11:14.470 14:30:22 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:11:14.470 14:30:22 -- setup/hugepages.sh@49 -- # local size=2097152 00:11:14.470 14:30:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:14.470 14:30:22 -- setup/hugepages.sh@51 -- # shift 00:11:14.470 14:30:22 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:14.470 14:30:22 -- setup/hugepages.sh@52 -- # local node_ids 00:11:14.470 14:30:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:14.470 14:30:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:14.470 14:30:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:14.470 14:30:22 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:14.470 14:30:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:11:14.470 14:30:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:14.470 14:30:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:14.470 14:30:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:14.470 14:30:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:14.470 14:30:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:14.470 14:30:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:14.470 14:30:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:14.470 14:30:22 -- setup/hugepages.sh@73 -- # return 0 00:11:14.470 14:30:22 -- setup/hugepages.sh@198 -- # setup output 00:11:14.470 14:30:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:14.470 14:30:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:14.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:14.731 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:14.731 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:14.731 14:30:23 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:11:14.731 14:30:23 -- setup/hugepages.sh@89 -- # local node 00:11:14.731 14:30:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:11:14.731 14:30:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:11:14.731 14:30:23 -- setup/hugepages.sh@92 -- # local surp 00:11:14.731 14:30:23 -- setup/hugepages.sh@93 -- # local resv 00:11:14.731 14:30:23 -- setup/hugepages.sh@94 -- # local anon 00:11:14.731 14:30:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:14.731 14:30:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:14.731 14:30:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:14.731 14:30:23 -- setup/common.sh@18 -- # local node= 00:11:14.731 14:30:23 -- setup/common.sh@19 -- # local var val 00:11:14.731 14:30:23 -- setup/common.sh@20 -- # local mem_f mem 00:11:14.731 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.731 14:30:23 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:11:14.731 14:30:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.731 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.731 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.731 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699280 kB' 'MemAvailable: 9499744 kB' 'Buffers: 2436 kB' 'Cached: 2013264 kB' 'SwapCached: 0 kB' 'Active: 854588 kB' 'Inactive: 1283696 kB' 'Active(anon): 133048 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'AnonPages: 124464 kB' 'Mapped: 51756 kB' 'Shmem: 10464 kB' 'KReclaimable: 64332 kB' 'Slab: 141016 kB' 'SReclaimable: 64332 kB' 'SUnreclaim: 76684 kB' 'KernelStack: 6580 kB' 'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- 
setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.732 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.732 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:14.733 14:30:23 -- setup/common.sh@33 -- # echo 0 00:11:14.733 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:14.733 14:30:23 -- setup/hugepages.sh@97 -- # anon=0 00:11:14.733 14:30:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:14.733 14:30:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:14.733 14:30:23 -- setup/common.sh@18 -- # local node= 00:11:14.733 14:30:23 -- setup/common.sh@19 -- # local var val 00:11:14.733 14:30:23 -- setup/common.sh@20 -- # local mem_f mem 00:11:14.733 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.733 14:30:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:14.733 14:30:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.733 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.733 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699280 kB' 'MemAvailable: 9499744 kB' 'Buffers: 2436 kB' 'Cached: 2013264 kB' 'SwapCached: 0 kB' 'Active: 854060 kB' 'Inactive: 1283696 kB' 
'Active(anon): 132520 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'AnonPages: 123932 kB' 'Mapped: 51624 kB' 'Shmem: 10464 kB' 'KReclaimable: 64332 kB' 'Slab: 141016 kB' 'SReclaimable: 64332 kB' 'SUnreclaim: 76684 kB' 'KernelStack: 6560 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # 
continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.733 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.733 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.734 14:30:23 -- setup/common.sh@33 -- # echo 0 00:11:14.734 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:14.734 14:30:23 -- setup/hugepages.sh@99 -- # surp=0 00:11:14.734 14:30:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:14.734 14:30:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:14.734 14:30:23 -- setup/common.sh@18 -- # local node= 00:11:14.734 14:30:23 -- setup/common.sh@19 -- # local var val 00:11:14.734 14:30:23 -- setup/common.sh@20 -- # local mem_f mem 00:11:14.734 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.734 14:30:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:14.734 14:30:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.734 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.734 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699688 kB' 'MemAvailable: 9500152 kB' 'Buffers: 2436 kB' 'Cached: 2013264 kB' 'SwapCached: 0 kB' 'Active: 854088 kB' 'Inactive: 1283696 kB' 'Active(anon): 132548 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'AnonPages: 123940 kB' 'Mapped: 51624 kB' 'Shmem: 10464 kB' 'KReclaimable: 64332 kB' 'Slab: 141020 kB' 'SReclaimable: 64332 kB' 'SUnreclaim: 76688 kB' 'KernelStack: 6560 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 
'DirectMap1G: 8388608 kB' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.734 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.734 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 
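The wall of "continue" entries above is not harness noise: it is the xtrace of the get_meminfo helper in setup/common.sh, which walks /proc/meminfo one key at a time and prints the value of the single key it was asked for (here HugePages_Rsvd), skipping every other field. A minimal re-creation of that loop, reconstructed from this trace rather than copied from the SPDK tree (names and details are approximate):

    shopt -s extglob
    get_meminfo() {                 # usage: get_meminfo HugePages_Rsvd   or   get_meminfo HugePages_Surp 0
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # with a node id the per-node copy is read instead (see the node0 pass further down)
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # each mismatch is one "continue" line in the trace
            echo "$val"
            return 0
        done
        return 1
    }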
00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 
14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.735 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.735 14:30:23 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.735 14:30:23 -- setup/common.sh@33 -- # echo 0 00:11:14.735 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:14.735 14:30:23 -- setup/hugepages.sh@100 -- # resv=0 00:11:14.735 14:30:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:14.735 nr_hugepages=1024 00:11:14.735 14:30:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:14.735 resv_hugepages=0 00:11:14.735 14:30:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:14.735 surplus_hugepages=0 00:11:14.735 anon_hugepages=0 00:11:14.735 14:30:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:14.735 14:30:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:14.735 14:30:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:14.735 14:30:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:14.735 14:30:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:14.736 14:30:23 -- setup/common.sh@18 -- # local node= 00:11:14.736 14:30:23 -- setup/common.sh@19 -- # local var val 00:11:14.736 14:30:23 -- setup/common.sh@20 -- # local mem_f mem 00:11:14.736 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.736 14:30:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:14.736 14:30:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.736 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.736 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699436 kB' 'MemAvailable: 9499900 kB' 'Buffers: 2436 kB' 'Cached: 2013264 kB' 'SwapCached: 0 kB' 'Active: 854100 kB' 'Inactive: 1283696 kB' 'Active(anon): 132560 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'AnonPages: 123932 kB' 'Mapped: 51624 kB' 'Shmem: 10464 kB' 'KReclaimable: 64332 kB' 'Slab: 141020 kB' 'SReclaimable: 64332 kB' 'SUnreclaim: 76688 kB' 'KernelStack: 6560 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 
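At this point hugepages.sh has collected anon=0, surp=0 and resv=0 and echoes the summary above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0). The (( 1024 == nr_hugepages + surp + resv )) steps are the actual verification: the hugepage total reported by the kernel must line up with the requested count plus surplus and reserved pages, which in this run is simply 1024 == 1024 + 0 + 0. The same bookkeeping in isolation, as a sketch that assumes the get_meminfo re-creation above and uses the values from this trace:

    nr_hugepages=1024                       # requested hugepage count for the test
    anon=$(get_meminfo AnonHugePages)       # 0 in this run (transparent hugepages, tracked separately)
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 in this run
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK ($total)"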
00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 
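Once the global HugePages_Total pass below finishes (echo 1024 further down), verify_nr_hugepages repeats the lookup per NUMA node: get_nodes enumerates /sys/devices/system/node/node*, and get_meminfo HugePages_Surp 0 re-runs the same scan against /sys/devices/system/node/node0/meminfo, ending in "node0=1024 expecting 1024". The section then switches to setup output with CLEAR_HUGE=no and NRHUGE=512 and runs scripts/setup.sh, which reports "Requested 512 hugepages but 1024 already allocated on node0", i.e. the existing, larger allocation appears to be kept rather than shrunk. A rough per-node sketch under the same assumptions as the earlier snippets (array names follow the trace, the exact bookkeeping is approximate):

    declare -a nodes_sys nodes_test
    nodes_sys[0]=1024                                         # hugepages currently configured on node0
    nodes_test[0]=1024                                        # count the test expects on node0
    (( nodes_test[0] += $(get_meminfo HugePages_Surp 0) ))    # per-node read: /sys/devices/system/node/node0/meminfo
    echo "node0=${nodes_test[0]} expecting ${nodes_sys[0]}"   # prints "node0=1024 expecting 1024"
    [[ ${nodes_test[0]} == "${nodes_sys[0]}" ]]               # the hugepages.sh@130 comparison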
00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.736 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.736 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.737 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.737 14:30:23 -- setup/common.sh@33 -- # echo 1024 00:11:14.737 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:14.737 14:30:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:14.737 14:30:23 -- setup/hugepages.sh@112 -- # get_nodes 00:11:14.737 14:30:23 -- setup/hugepages.sh@27 -- # local node 00:11:14.737 14:30:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:14.737 14:30:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:14.737 14:30:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:14.737 14:30:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:14.737 14:30:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:14.737 14:30:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:14.737 14:30:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:14.737 14:30:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:14.737 14:30:23 -- setup/common.sh@18 -- # local node=0 00:11:14.737 14:30:23 -- 
setup/common.sh@19 -- # local var val 00:11:14.737 14:30:23 -- setup/common.sh@20 -- # local mem_f mem 00:11:14.737 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.737 14:30:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:14.737 14:30:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:14.737 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.737 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.737 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7699860 kB' 'MemUsed: 4542116 kB' 'SwapCached: 0 kB' 'Active: 854416 kB' 'Inactive: 1283696 kB' 'Active(anon): 132876 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'FilePages: 2015700 kB' 'Mapped: 51624 kB' 'AnonPages: 124080 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64332 kB' 'Slab: 141020 kB' 'SReclaimable: 64332 kB' 'SUnreclaim: 76688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- 
# read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- 
# continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.997 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.997 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.998 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.998 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.998 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.998 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.998 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.998 14:30:23 -- setup/common.sh@32 -- # continue 00:11:14.998 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:14.998 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:14.998 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.998 14:30:23 -- setup/common.sh@33 -- # echo 0 00:11:14.998 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:14.998 14:30:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:14.998 14:30:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:14.998 14:30:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:14.998 14:30:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:14.998 14:30:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:14.998 node0=1024 expecting 1024 00:11:14.998 14:30:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:14.998 14:30:23 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:11:14.998 14:30:23 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:11:14.998 14:30:23 -- setup/hugepages.sh@202 -- # setup output 00:11:14.998 14:30:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:14.998 14:30:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:15.260 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.260 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:15.260 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:15.260 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:11:15.260 14:30:23 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:11:15.260 14:30:23 -- setup/hugepages.sh@89 -- # local node 00:11:15.260 14:30:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:11:15.260 14:30:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:11:15.260 14:30:23 -- setup/hugepages.sh@92 -- # local surp 00:11:15.260 14:30:23 -- setup/hugepages.sh@93 -- # local resv 00:11:15.260 14:30:23 -- setup/hugepages.sh@94 -- # local anon 00:11:15.260 14:30:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:15.260 14:30:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:15.260 14:30:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:15.260 14:30:23 -- setup/common.sh@18 -- # local node= 00:11:15.260 14:30:23 -- setup/common.sh@19 -- # local var val 00:11:15.260 14:30:23 -- setup/common.sh@20 -- # local mem_f mem 00:11:15.260 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.260 14:30:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.260 14:30:23 -- 
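The scan that begins here walks /proc/meminfo one "key: value" pair at a time until it reaches the requested field (AnonHugePages). A minimal sketch of an equivalent single-key lookup, assuming standard /proc/meminfo formatting; the meminfo_value helper below is illustrative only and is not part of the SPDK setup scripts:

  #!/usr/bin/env bash
  # Illustrative stand-in for the key-by-key scan traced above.
  meminfo_value() {
      local key=$1 file=${2:-/proc/meminfo}
      # Print the value column for "<key>:", dropping the "kB" unit if present.
      awk -v k="${key}:" '$1 == k { print $2; exit }' "$file"
  }

  meminfo_value AnonHugePages     # -> 0 in the snapshot captured below
  meminfo_value HugePages_Total   # -> 1024
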
setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.260 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.260 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7698456 kB' 'MemAvailable: 9498916 kB' 'Buffers: 2436 kB' 'Cached: 2013264 kB' 'SwapCached: 0 kB' 'Active: 850028 kB' 'Inactive: 1283696 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'AnonPages: 119880 kB' 'Mapped: 51108 kB' 'Shmem: 10464 kB' 'KReclaimable: 64324 kB' 'Slab: 140904 kB' 'SReclaimable: 64324 kB' 'SUnreclaim: 76580 kB' 'KernelStack: 6452 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r 
var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.260 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.260 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- 
setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.261 14:30:23 -- setup/common.sh@33 -- # echo 0 00:11:15.261 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:15.261 14:30:23 -- setup/hugepages.sh@97 -- # anon=0 00:11:15.261 14:30:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:15.261 14:30:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:15.261 14:30:23 -- setup/common.sh@18 -- # local node= 00:11:15.261 14:30:23 -- setup/common.sh@19 -- # local var val 00:11:15.261 14:30:23 -- setup/common.sh@20 -- # local mem_f mem 00:11:15.261 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.261 14:30:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.261 14:30:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.261 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.261 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7698204 kB' 'MemAvailable: 9498664 kB' 'Buffers: 2436 kB' 'Cached: 2013264 kB' 'SwapCached: 0 kB' 'Active: 849912 kB' 'Inactive: 1283696 kB' 'Active(anon): 128372 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 
'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'AnonPages: 119504 kB' 'Mapped: 50988 kB' 'Shmem: 10464 kB' 'KReclaimable: 64324 kB' 'Slab: 140900 kB' 'SReclaimable: 64324 kB' 'SUnreclaim: 76576 kB' 'KernelStack: 6356 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.261 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.261 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 
00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 
14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r 
var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.262 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.262 14:30:23 -- setup/common.sh@33 -- # echo 0 00:11:15.262 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:15.262 14:30:23 -- setup/hugepages.sh@99 -- # surp=0 00:11:15.262 14:30:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:15.262 14:30:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:15.262 14:30:23 -- setup/common.sh@18 -- # local node= 00:11:15.262 14:30:23 -- setup/common.sh@19 -- # local var val 00:11:15.262 14:30:23 -- setup/common.sh@20 -- # local mem_f mem 00:11:15.262 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.262 14:30:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.262 14:30:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.262 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.262 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.262 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7697952 kB' 'MemAvailable: 9498412 kB' 'Buffers: 2436 kB' 'Cached: 2013264 kB' 'SwapCached: 0 kB' 'Active: 849368 kB' 'Inactive: 1283696 kB' 'Active(anon): 127828 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'AnonPages: 119204 kB' 'Mapped: 50884 kB' 'Shmem: 10464 kB' 'KReclaimable: 64324 kB' 'Slab: 140900 kB' 'SReclaimable: 64324 kB' 'SUnreclaim: 76576 kB' 'KernelStack: 6448 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 
'DirectMap1G: 8388608 kB' 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 
00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 
-- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.263 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.263 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:11:15.264 14:30:23 -- setup/common.sh@33 -- # echo 0 00:11:15.264 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:15.264 nr_hugepages=1024 00:11:15.264 resv_hugepages=0 00:11:15.264 surplus_hugepages=0 00:11:15.264 14:30:23 -- setup/hugepages.sh@100 -- # resv=0 00:11:15.264 14:30:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:15.264 14:30:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:15.264 14:30:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:15.264 anon_hugepages=0 00:11:15.264 14:30:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:15.264 14:30:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:15.264 14:30:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:15.264 14:30:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:15.264 14:30:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:15.264 14:30:23 -- setup/common.sh@18 -- # local node= 00:11:15.264 14:30:23 -- setup/common.sh@19 -- # local var val 00:11:15.264 14:30:23 -- setup/common.sh@20 -- # local mem_f mem 00:11:15.264 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.264 14:30:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.264 14:30:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.264 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.264 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7697952 kB' 'MemAvailable: 9498412 kB' 'Buffers: 2436 kB' 'Cached: 2013264 kB' 'SwapCached: 0 kB' 'Active: 849592 kB' 'Inactive: 1283696 kB' 'Active(anon): 128052 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'AnonPages: 119244 kB' 'Mapped: 50884 kB' 'Shmem: 10464 kB' 'KReclaimable: 64324 kB' 'Slab: 140900 kB' 'SReclaimable: 64324 kB' 'SUnreclaim: 76576 kB' 'KernelStack: 6448 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.264 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.264 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # 
IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.265 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.265 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.266 14:30:23 -- setup/common.sh@33 -- # echo 1024 00:11:15.266 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:15.266 14:30:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:15.266 14:30:23 -- setup/hugepages.sh@112 -- # get_nodes 00:11:15.266 14:30:23 -- setup/hugepages.sh@27 -- # local node 00:11:15.266 14:30:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:15.266 14:30:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:15.266 14:30:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:15.266 14:30:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:15.266 14:30:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:15.266 14:30:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:15.266 14:30:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:15.266 14:30:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:15.266 14:30:23 -- setup/common.sh@18 -- # local node=0 00:11:15.266 14:30:23 -- setup/common.sh@19 -- # local var val 00:11:15.266 14:30:23 -- 
setup/common.sh@20 -- # local mem_f mem 00:11:15.266 14:30:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.266 14:30:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:15.266 14:30:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:15.266 14:30:23 -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.266 14:30:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7697952 kB' 'MemUsed: 4544024 kB' 'SwapCached: 0 kB' 'Active: 849548 kB' 'Inactive: 1283696 kB' 'Active(anon): 128008 kB' 'Inactive(anon): 0 kB' 'Active(file): 721540 kB' 'Inactive(file): 1283696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1260 kB' 'Writeback: 0 kB' 'FilePages: 2015700 kB' 'Mapped: 50884 kB' 'AnonPages: 119136 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64324 kB' 'Slab: 140900 kB' 'SReclaimable: 64324 kB' 'SUnreclaim: 76576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 
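Aside: the long [[ ... == pattern ]] / continue runs in this part of the trace are setup/common.sh's get_meminfo helper walking a meminfo file with IFS=': ' until it reaches the requested key (HugePages_Total above, HugePages_Surp for node 0 here). A minimal standalone sketch of that parsing idea - illustrative only, not copied from the SPDK scripts; the function name and the sed-based prefix stripping are this note's own choices - could look like:

    #!/usr/bin/env bash
    # Look up one key from /proc/meminfo, or from a per-NUMA-node meminfo file.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix each line with "Node <N> "; strip it so the same
        # IFS=': ' split works for both layouts, then stop at the requested key.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Total      # prints 1024 on the VM in this log
    get_meminfo HugePages_Surp 0     # surplus huge pages on node 0 (0 here)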
00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.266 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.266 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.267 
14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # continue 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # IFS=': ' 00:11:15.267 14:30:23 -- setup/common.sh@31 -- # read -r var val _ 00:11:15.267 14:30:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.267 14:30:23 -- setup/common.sh@33 -- # echo 0 00:11:15.267 14:30:23 -- setup/common.sh@33 -- # return 0 00:11:15.267 14:30:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:15.267 14:30:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:15.267 14:30:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:15.267 14:30:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:15.267 14:30:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:15.267 node0=1024 expecting 1024 00:11:15.267 14:30:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:15.267 ************************************ 00:11:15.267 END TEST no_shrink_alloc 00:11:15.267 ************************************ 00:11:15.267 00:11:15.267 real 0m1.038s 00:11:15.267 user 0m0.514s 00:11:15.267 sys 0m0.534s 00:11:15.267 14:30:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:15.267 14:30:23 -- common/autotest_common.sh@10 -- # set +x 00:11:15.525 14:30:23 -- setup/hugepages.sh@217 -- # clear_hp 00:11:15.525 14:30:23 -- setup/hugepages.sh@37 -- # local node hp 00:11:15.525 14:30:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:15.525 14:30:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:15.525 14:30:23 -- setup/hugepages.sh@41 -- # echo 0 00:11:15.525 14:30:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:15.525 14:30:23 -- setup/hugepages.sh@41 -- # echo 0 00:11:15.525 14:30:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:15.525 14:30:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:15.525 ************************************ 00:11:15.525 END TEST hugepages 00:11:15.525 ************************************ 00:11:15.525 00:11:15.525 real 0m4.966s 00:11:15.525 user 0m2.343s 00:11:15.525 sys 0m2.527s 00:11:15.525 14:30:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:15.525 14:30:23 -- common/autotest_common.sh@10 -- # set +x 00:11:15.525 14:30:23 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:15.526 14:30:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:15.526 14:30:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:15.526 14:30:23 -- common/autotest_common.sh@10 -- # set +x 00:11:15.526 ************************************ 00:11:15.526 START TEST driver 00:11:15.526 
************************************ 00:11:15.526 14:30:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:15.526 * Looking for test storage... 00:11:15.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:15.526 14:30:24 -- setup/driver.sh@68 -- # setup reset 00:11:15.526 14:30:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:15.526 14:30:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:16.095 14:30:24 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:11:16.095 14:30:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:16.095 14:30:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.095 14:30:24 -- common/autotest_common.sh@10 -- # set +x 00:11:16.353 ************************************ 00:11:16.353 START TEST guess_driver 00:11:16.353 ************************************ 00:11:16.353 14:30:24 -- common/autotest_common.sh@1111 -- # guess_driver 00:11:16.353 14:30:24 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:11:16.353 14:30:24 -- setup/driver.sh@47 -- # local fail=0 00:11:16.353 14:30:24 -- setup/driver.sh@49 -- # pick_driver 00:11:16.353 14:30:24 -- setup/driver.sh@36 -- # vfio 00:11:16.353 14:30:24 -- setup/driver.sh@21 -- # local iommu_grups 00:11:16.354 14:30:24 -- setup/driver.sh@22 -- # local unsafe_vfio 00:11:16.354 14:30:24 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:11:16.354 14:30:24 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:11:16.354 14:30:24 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:11:16.354 14:30:24 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:11:16.354 14:30:24 -- setup/driver.sh@32 -- # return 1 00:11:16.354 14:30:24 -- setup/driver.sh@38 -- # uio 00:11:16.354 14:30:24 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:11:16.354 14:30:24 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:11:16.354 14:30:24 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:11:16.354 14:30:24 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:11:16.354 14:30:24 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:11:16.354 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:11:16.354 14:30:24 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:11:16.354 Looking for driver=uio_pci_generic 00:11:16.354 14:30:24 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:11:16.354 14:30:24 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:11:16.354 14:30:24 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:11:16.354 14:30:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:16.354 14:30:24 -- setup/driver.sh@45 -- # setup output config 00:11:16.354 14:30:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:16.354 14:30:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:16.920 14:30:25 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:11:16.920 14:30:25 -- setup/driver.sh@58 -- # continue 00:11:16.920 14:30:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:16.920 14:30:25 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:16.920 14:30:25 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:11:16.920 14:30:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:11:17.179 14:30:25 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.179 14:30:25 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:11:17.179 14:30:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.179 14:30:25 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:11:17.179 14:30:25 -- setup/driver.sh@65 -- # setup reset 00:11:17.179 14:30:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:17.179 14:30:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.746 00:11:17.746 real 0m1.411s 00:11:17.746 user 0m0.538s 00:11:17.746 sys 0m0.886s 00:11:17.746 ************************************ 00:11:17.746 END TEST guess_driver 00:11:17.746 ************************************ 00:11:17.746 14:30:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:17.746 14:30:26 -- common/autotest_common.sh@10 -- # set +x 00:11:17.746 00:11:17.746 real 0m2.158s 00:11:17.746 user 0m0.811s 00:11:17.746 sys 0m1.405s 00:11:17.746 14:30:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:17.746 ************************************ 00:11:17.746 END TEST driver 00:11:17.746 ************************************ 00:11:17.746 14:30:26 -- common/autotest_common.sh@10 -- # set +x 00:11:17.746 14:30:26 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:11:17.746 14:30:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:17.746 14:30:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.746 14:30:26 -- common/autotest_common.sh@10 -- # set +x 00:11:17.746 ************************************ 00:11:17.746 START TEST devices 00:11:17.746 ************************************ 00:11:17.746 14:30:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:11:17.746 * Looking for test storage... 
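Aside: the guess_driver test that finished just above settles on uio_pci_generic only after two checks - vfio is rejected because no /sys/kernel/iommu_groups entries exist (and unsafe no-IOMMU mode is off), and modprobe --show-depends is used to confirm the uio module chain actually resolves to .ko files. A condensed sketch of that decision, written for illustration rather than taken from setup/driver.sh, might be:

    #!/usr/bin/env bash
    # Prefer vfio-pci when the IOMMU is usable, otherwise fall back to
    # uio_pci_generic - roughly the decision visible in the trace above.
    shopt -s nullglob                 # so an empty iommu_groups dir counts as 0
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*) unsafe=""
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # No usable IOMMU: accept uio_pci_generic only if modprobe can resolve
        # the module and its dependencies to real .ko files.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found' >&2
        return 1
    }

    pick_driver                       # prints uio_pci_generic on this VM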
00:11:17.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:17.746 14:30:26 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:11:17.746 14:30:26 -- setup/devices.sh@192 -- # setup reset 00:11:17.746 14:30:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:17.746 14:30:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:18.725 14:30:27 -- setup/devices.sh@194 -- # get_zoned_devs 00:11:18.725 14:30:27 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:11:18.725 14:30:27 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:11:18.725 14:30:27 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:11:18.725 14:30:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:18.725 14:30:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:11:18.725 14:30:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:11:18.725 14:30:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:18.725 14:30:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:18.725 14:30:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:18.725 14:30:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:11:18.725 14:30:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:11:18.725 14:30:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:11:18.725 14:30:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:18.725 14:30:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:18.725 14:30:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:11:18.725 14:30:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:11:18.725 14:30:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:11:18.725 14:30:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:18.725 14:30:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:18.725 14:30:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:11:18.725 14:30:27 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:11:18.725 14:30:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:18.725 14:30:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:18.725 14:30:27 -- setup/devices.sh@196 -- # blocks=() 00:11:18.725 14:30:27 -- setup/devices.sh@196 -- # declare -a blocks 00:11:18.725 14:30:27 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:11:18.725 14:30:27 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:11:18.725 14:30:27 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:11:18.726 14:30:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:18.726 14:30:27 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:11:18.726 14:30:27 -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:18.726 14:30:27 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:18.726 14:30:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:11:18.726 14:30:27 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:11:18.726 14:30:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:11:18.726 No valid GPT data, bailing 00:11:18.726 14:30:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:18.726 
14:30:27 -- scripts/common.sh@391 -- # pt= 00:11:18.726 14:30:27 -- scripts/common.sh@392 -- # return 1 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:11:18.726 14:30:27 -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:18.726 14:30:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:18.726 14:30:27 -- setup/common.sh@80 -- # echo 4294967296 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:18.726 14:30:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:18.726 14:30:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:18.726 14:30:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:18.726 14:30:27 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:11:18.726 14:30:27 -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:18.726 14:30:27 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:18.726 14:30:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:11:18.726 14:30:27 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:11:18.726 14:30:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:11:18.726 No valid GPT data, bailing 00:11:18.726 14:30:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:11:18.726 14:30:27 -- scripts/common.sh@391 -- # pt= 00:11:18.726 14:30:27 -- scripts/common.sh@392 -- # return 1 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:11:18.726 14:30:27 -- setup/common.sh@76 -- # local dev=nvme0n2 00:11:18.726 14:30:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:11:18.726 14:30:27 -- setup/common.sh@80 -- # echo 4294967296 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:18.726 14:30:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:18.726 14:30:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:18.726 14:30:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:18.726 14:30:27 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:11:18.726 14:30:27 -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:18.726 14:30:27 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:18.726 14:30:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:11:18.726 14:30:27 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:11:18.726 14:30:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:11:18.726 No valid GPT data, bailing 00:11:18.726 14:30:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:11:18.726 14:30:27 -- scripts/common.sh@391 -- # pt= 00:11:18.726 14:30:27 -- scripts/common.sh@392 -- # return 1 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:11:18.726 14:30:27 -- setup/common.sh@76 -- # local dev=nvme0n3 00:11:18.726 14:30:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:11:18.726 14:30:27 -- setup/common.sh@80 -- # echo 4294967296 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:18.726 14:30:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:18.726 14:30:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:18.726 14:30:27 -- setup/devices.sh@200 -- # for block in 
"/sys/block/nvme"!(*c*) 00:11:18.726 14:30:27 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:11:18.726 14:30:27 -- setup/devices.sh@201 -- # ctrl=nvme1 00:11:18.726 14:30:27 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:11:18.726 14:30:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:11:18.726 14:30:27 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:11:18.726 14:30:27 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:11:18.726 14:30:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:11:18.984 No valid GPT data, bailing 00:11:18.984 14:30:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:18.984 14:30:27 -- scripts/common.sh@391 -- # pt= 00:11:18.984 14:30:27 -- scripts/common.sh@392 -- # return 1 00:11:18.984 14:30:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:11:18.984 14:30:27 -- setup/common.sh@76 -- # local dev=nvme1n1 00:11:18.984 14:30:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:11:18.984 14:30:27 -- setup/common.sh@80 -- # echo 5368709120 00:11:18.984 14:30:27 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:11:18.984 14:30:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:18.984 14:30:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:11:18.984 14:30:27 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:11:18.984 14:30:27 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:11:18.984 14:30:27 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:11:18.984 14:30:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.984 14:30:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.984 14:30:27 -- common/autotest_common.sh@10 -- # set +x 00:11:18.984 ************************************ 00:11:18.984 START TEST nvme_mount 00:11:18.984 ************************************ 00:11:18.984 14:30:27 -- common/autotest_common.sh@1111 -- # nvme_mount 00:11:18.984 14:30:27 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:11:18.984 14:30:27 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:11:18.984 14:30:27 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:18.984 14:30:27 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:18.984 14:30:27 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:11:18.984 14:30:27 -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:18.984 14:30:27 -- setup/common.sh@40 -- # local part_no=1 00:11:18.984 14:30:27 -- setup/common.sh@41 -- # local size=1073741824 00:11:18.984 14:30:27 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:18.984 14:30:27 -- setup/common.sh@44 -- # parts=() 00:11:18.984 14:30:27 -- setup/common.sh@44 -- # local parts 00:11:18.984 14:30:27 -- setup/common.sh@46 -- # (( part = 1 )) 00:11:18.984 14:30:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:18.985 14:30:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:18.985 14:30:27 -- setup/common.sh@46 -- # (( part++ )) 00:11:18.985 14:30:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:18.985 14:30:27 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:11:18.985 14:30:27 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:18.985 14:30:27 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:11:19.920 Creating new GPT entries in memory. 
00:11:19.920 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:19.920 other utilities. 00:11:19.920 14:30:28 -- setup/common.sh@57 -- # (( part = 1 )) 00:11:19.920 14:30:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:19.920 14:30:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:19.920 14:30:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:19.920 14:30:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:21.298 Creating new GPT entries in memory. 00:11:21.298 The operation has completed successfully. 00:11:21.298 14:30:29 -- setup/common.sh@57 -- # (( part++ )) 00:11:21.298 14:30:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:21.298 14:30:29 -- setup/common.sh@62 -- # wait 56595 00:11:21.298 14:30:29 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.298 14:30:29 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:11:21.298 14:30:29 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.298 14:30:29 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:11:21.298 14:30:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:11:21.298 14:30:29 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.298 14:30:29 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:21.298 14:30:29 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:21.298 14:30:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:11:21.298 14:30:29 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.298 14:30:29 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:21.298 14:30:29 -- setup/devices.sh@53 -- # local found=0 00:11:21.298 14:30:29 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:21.298 14:30:29 -- setup/devices.sh@56 -- # : 00:11:21.298 14:30:29 -- setup/devices.sh@59 -- # local pci status 00:11:21.298 14:30:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:21.298 14:30:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:21.298 14:30:29 -- setup/devices.sh@47 -- # setup output config 00:11:21.298 14:30:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:21.298 14:30:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:21.298 14:30:29 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:21.298 14:30:29 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:11:21.298 14:30:29 -- setup/devices.sh@63 -- # found=1 00:11:21.298 14:30:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:21.298 14:30:29 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:21.298 14:30:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:21.557 14:30:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:21.557 14:30:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 
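Aside: the nvme_mount steps traced here reduce to a zap / repartition / format / mount cycle on the test disk, driven by sgdisk, mkfs.ext4 and mount. The following condensed sketch replays that cycle; the device path and mount point are placeholders, and the teardown at the end mirrors the wipefs calls that appear later in the test:

    #!/usr/bin/env bash
    set -euo pipefail
    disk=/dev/nvme0n1            # placeholder: the disk the test claims
    mnt=/tmp/nvme_mount_demo     # placeholder mount point

    # Wipe any old partition table, then create one ~128 MiB partition
    # (sectors 2048-264191), holding a lock on the disk while sgdisk runs.
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:264191

    # Format the new partition and mount it, as the trace does with -qF.
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    : > "$mnt/test_nvme"         # marker file for the later verify step

    # Teardown mirrors the cleanup_nvme path: unmount, then clear signatures.
    umount "$mnt"
    wipefs --all "${disk}p1"
    wipefs --all "$disk"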
00:11:21.557 14:30:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:21.557 14:30:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:21.557 14:30:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:21.557 14:30:30 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:11:21.557 14:30:30 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.557 14:30:30 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:21.557 14:30:30 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:21.557 14:30:30 -- setup/devices.sh@110 -- # cleanup_nvme 00:11:21.557 14:30:30 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.557 14:30:30 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.557 14:30:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:21.557 14:30:30 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:21.557 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:21.557 14:30:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:21.557 14:30:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:21.815 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:21.815 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:21.816 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:21.816 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:21.816 14:30:30 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:11:21.816 14:30:30 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:11:21.816 14:30:30 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.816 14:30:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:11:21.816 14:30:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:11:21.816 14:30:30 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.816 14:30:30 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:21.816 14:30:30 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:21.816 14:30:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:11:21.816 14:30:30 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:21.816 14:30:30 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:21.816 14:30:30 -- setup/devices.sh@53 -- # local found=0 00:11:21.816 14:30:30 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:21.816 14:30:30 -- setup/devices.sh@56 -- # : 00:11:21.816 14:30:30 -- setup/devices.sh@59 -- # local pci status 00:11:21.816 14:30:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:21.816 14:30:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:21.816 14:30:30 -- setup/devices.sh@47 -- # setup output config 00:11:21.816 14:30:30 -- setup/common.sh@9 -- # [[ output == 
output ]] 00:11:21.816 14:30:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:22.075 14:30:30 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.075 14:30:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:11:22.075 14:30:30 -- setup/devices.sh@63 -- # found=1 00:11:22.075 14:30:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.075 14:30:30 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.075 14:30:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.334 14:30:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.334 14:30:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.334 14:30:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.334 14:30:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.334 14:30:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:22.334 14:30:30 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:11:22.334 14:30:30 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:22.334 14:30:30 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:22.334 14:30:30 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:22.334 14:30:30 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:22.334 14:30:30 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:11:22.334 14:30:30 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:22.334 14:30:30 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:11:22.334 14:30:30 -- setup/devices.sh@50 -- # local mount_point= 00:11:22.334 14:30:30 -- setup/devices.sh@51 -- # local test_file= 00:11:22.334 14:30:30 -- setup/devices.sh@53 -- # local found=0 00:11:22.334 14:30:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:22.334 14:30:30 -- setup/devices.sh@59 -- # local pci status 00:11:22.334 14:30:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.334 14:30:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:22.334 14:30:30 -- setup/devices.sh@47 -- # setup output config 00:11:22.334 14:30:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:22.334 14:30:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:22.593 14:30:31 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.593 14:30:31 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:11:22.593 14:30:31 -- setup/devices.sh@63 -- # found=1 00:11:22.593 14:30:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.593 14:30:31 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.593 14:30:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.851 14:30:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.851 14:30:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.851 14:30:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.851 14:30:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:23.110 
14:30:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:23.110 14:30:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:23.110 14:30:31 -- setup/devices.sh@68 -- # return 0 00:11:23.110 14:30:31 -- setup/devices.sh@128 -- # cleanup_nvme 00:11:23.110 14:30:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:23.110 14:30:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:23.110 14:30:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:23.110 14:30:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:23.110 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:23.110 00:11:23.110 real 0m4.042s 00:11:23.110 user 0m0.722s 00:11:23.110 sys 0m1.069s 00:11:23.110 14:30:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:23.110 14:30:31 -- common/autotest_common.sh@10 -- # set +x 00:11:23.110 ************************************ 00:11:23.110 END TEST nvme_mount 00:11:23.110 ************************************ 00:11:23.110 14:30:31 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:11:23.110 14:30:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.110 14:30:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.110 14:30:31 -- common/autotest_common.sh@10 -- # set +x 00:11:23.110 ************************************ 00:11:23.110 START TEST dm_mount 00:11:23.110 ************************************ 00:11:23.110 14:30:31 -- common/autotest_common.sh@1111 -- # dm_mount 00:11:23.110 14:30:31 -- setup/devices.sh@144 -- # pv=nvme0n1 00:11:23.110 14:30:31 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:11:23.110 14:30:31 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:11:23.110 14:30:31 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:11:23.110 14:30:31 -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:23.110 14:30:31 -- setup/common.sh@40 -- # local part_no=2 00:11:23.110 14:30:31 -- setup/common.sh@41 -- # local size=1073741824 00:11:23.110 14:30:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:23.110 14:30:31 -- setup/common.sh@44 -- # parts=() 00:11:23.110 14:30:31 -- setup/common.sh@44 -- # local parts 00:11:23.110 14:30:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:11:23.110 14:30:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:23.110 14:30:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:23.110 14:30:31 -- setup/common.sh@46 -- # (( part++ )) 00:11:23.111 14:30:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:23.111 14:30:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:23.111 14:30:31 -- setup/common.sh@46 -- # (( part++ )) 00:11:23.111 14:30:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:23.111 14:30:31 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:11:23.111 14:30:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:23.111 14:30:31 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:24.048 Creating new GPT entries in memory. 00:11:24.048 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:24.048 other utilities. 00:11:24.048 14:30:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:11:24.048 14:30:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:24.048 14:30:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:11:24.048 14:30:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:24.048 14:30:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:25.425 Creating new GPT entries in memory. 00:11:25.425 The operation has completed successfully. 00:11:25.425 14:30:33 -- setup/common.sh@57 -- # (( part++ )) 00:11:25.425 14:30:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:25.425 14:30:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:25.425 14:30:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:25.425 14:30:33 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:11:26.361 The operation has completed successfully. 00:11:26.361 14:30:34 -- setup/common.sh@57 -- # (( part++ )) 00:11:26.361 14:30:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:26.361 14:30:34 -- setup/common.sh@62 -- # wait 57059 00:11:26.361 14:30:34 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:11:26.361 14:30:34 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:26.361 14:30:34 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:26.361 14:30:34 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:11:26.361 14:30:34 -- setup/devices.sh@160 -- # for t in {1..5} 00:11:26.361 14:30:34 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:26.361 14:30:34 -- setup/devices.sh@161 -- # break 00:11:26.361 14:30:34 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:26.361 14:30:34 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:11:26.361 14:30:34 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:11:26.361 14:30:34 -- setup/devices.sh@166 -- # dm=dm-0 00:11:26.361 14:30:34 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:11:26.361 14:30:34 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:11:26.361 14:30:34 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:26.361 14:30:34 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:11:26.361 14:30:34 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:26.361 14:30:34 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:26.361 14:30:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:11:26.361 14:30:34 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:26.361 14:30:34 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:26.361 14:30:34 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:26.361 14:30:34 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:11:26.361 14:30:34 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:26.361 14:30:34 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:26.361 14:30:34 -- setup/devices.sh@53 -- # local found=0 00:11:26.361 14:30:34 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 
]] 00:11:26.361 14:30:34 -- setup/devices.sh@56 -- # : 00:11:26.361 14:30:34 -- setup/devices.sh@59 -- # local pci status 00:11:26.361 14:30:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.361 14:30:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:26.361 14:30:34 -- setup/devices.sh@47 -- # setup output config 00:11:26.361 14:30:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:26.361 14:30:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:26.361 14:30:34 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.361 14:30:34 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:11:26.361 14:30:34 -- setup/devices.sh@63 -- # found=1 00:11:26.361 14:30:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.361 14:30:34 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.361 14:30:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.619 14:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.619 14:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.619 14:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.619 14:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.878 14:30:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:26.878 14:30:35 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:11:26.878 14:30:35 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:26.878 14:30:35 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:26.878 14:30:35 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:26.878 14:30:35 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:26.878 14:30:35 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:11:26.878 14:30:35 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:26.878 14:30:35 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:11:26.878 14:30:35 -- setup/devices.sh@50 -- # local mount_point= 00:11:26.878 14:30:35 -- setup/devices.sh@51 -- # local test_file= 00:11:26.878 14:30:35 -- setup/devices.sh@53 -- # local found=0 00:11:26.878 14:30:35 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:26.878 14:30:35 -- setup/devices.sh@59 -- # local pci status 00:11:26.878 14:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.878 14:30:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:26.878 14:30:35 -- setup/devices.sh@47 -- # setup output config 00:11:26.878 14:30:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:26.878 14:30:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:26.878 14:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.878 14:30:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:11:26.878 14:30:35 -- setup/devices.sh@63 -- # 
found=1 00:11:26.878 14:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.878 14:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.878 14:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:27.136 14:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:27.136 14:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:27.136 14:30:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:27.136 14:30:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:27.394 14:30:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:27.394 14:30:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:27.394 14:30:35 -- setup/devices.sh@68 -- # return 0 00:11:27.394 14:30:35 -- setup/devices.sh@187 -- # cleanup_dm 00:11:27.394 14:30:35 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:27.394 14:30:35 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:27.394 14:30:35 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:11:27.394 14:30:35 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:27.394 14:30:35 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:11:27.394 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:27.394 14:30:35 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:27.394 14:30:35 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:11:27.394 00:11:27.394 real 0m4.212s 00:11:27.394 user 0m0.484s 00:11:27.394 sys 0m0.692s 00:11:27.394 14:30:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:27.394 ************************************ 00:11:27.394 END TEST dm_mount 00:11:27.394 14:30:35 -- common/autotest_common.sh@10 -- # set +x 00:11:27.394 ************************************ 00:11:27.394 14:30:35 -- setup/devices.sh@1 -- # cleanup 00:11:27.394 14:30:35 -- setup/devices.sh@11 -- # cleanup_nvme 00:11:27.394 14:30:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:27.394 14:30:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:27.394 14:30:35 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:27.395 14:30:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:27.395 14:30:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:27.654 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:27.654 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:27.654 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:27.654 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:27.654 14:30:36 -- setup/devices.sh@12 -- # cleanup_dm 00:11:27.654 14:30:36 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:27.654 14:30:36 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:27.654 14:30:36 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:27.654 14:30:36 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:27.654 14:30:36 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:11:27.654 14:30:36 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:11:27.654 00:11:27.654 real 0m9.869s 00:11:27.654 user 0m1.903s 00:11:27.654 sys 0m2.385s 00:11:27.654 14:30:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:27.654 14:30:36 -- common/autotest_common.sh@10 -- # 
set +x 00:11:27.654 ************************************ 00:11:27.654 END TEST devices 00:11:27.654 ************************************ 00:11:27.654 00:11:27.654 real 0m22.377s 00:11:27.654 user 0m7.323s 00:11:27.654 sys 0m9.316s 00:11:27.654 14:30:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:27.654 14:30:36 -- common/autotest_common.sh@10 -- # set +x 00:11:27.654 ************************************ 00:11:27.654 END TEST setup.sh 00:11:27.654 ************************************ 00:11:27.654 14:30:36 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:28.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:28.593 Hugepages 00:11:28.593 node hugesize free / total 00:11:28.593 node0 1048576kB 0 / 0 00:11:28.593 node0 2048kB 2048 / 2048 00:11:28.593 00:11:28.593 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:28.593 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:28.593 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:28.593 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:11:28.593 14:30:37 -- spdk/autotest.sh@130 -- # uname -s 00:11:28.593 14:30:37 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:11:28.593 14:30:37 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:11:28.593 14:30:37 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:29.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:29.443 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:29.443 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:29.443 14:30:37 -- common/autotest_common.sh@1518 -- # sleep 1 00:11:30.829 14:30:38 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:30.829 14:30:38 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:30.829 14:30:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:30.829 14:30:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:30.829 14:30:38 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:30.829 14:30:38 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:30.829 14:30:38 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:30.829 14:30:38 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:30.829 14:30:38 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:30.829 14:30:39 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:11:30.829 14:30:39 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:30.829 14:30:39 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:30.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:30.829 Waiting for block devices as requested 00:11:30.829 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.095 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.095 14:30:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:31.095 14:30:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:11:31.095 14:30:39 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:31.095 14:30:39 -- common/autotest_common.sh@1488 -- # grep 
0000:00:10.0/nvme/nvme 00:11:31.095 14:30:39 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:31.095 14:30:39 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:31.095 14:30:39 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:31.095 14:30:39 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:11:31.095 14:30:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:11:31.095 14:30:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:11:31.095 14:30:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:11:31.095 14:30:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:31.095 14:30:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:31.095 14:30:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:31.095 14:30:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:31.095 14:30:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:31.095 14:30:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:11:31.095 14:30:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:31.095 14:30:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:31.095 14:30:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:31.095 14:30:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:31.095 14:30:39 -- common/autotest_common.sh@1543 -- # continue 00:11:31.095 14:30:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:31.095 14:30:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:31.095 14:30:39 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:31.095 14:30:39 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:11:31.095 14:30:39 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:31.095 14:30:39 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:31.095 14:30:39 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:31.095 14:30:39 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:11:31.095 14:30:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:11:31.095 14:30:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:11:31.095 14:30:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:11:31.095 14:30:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:31.095 14:30:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:31.095 14:30:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:31.095 14:30:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:31.095 14:30:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:31.095 14:30:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:11:31.095 14:30:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:31.095 14:30:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:31.095 14:30:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:31.095 14:30:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:31.095 14:30:39 -- common/autotest_common.sh@1543 -- # continue 00:11:31.095 14:30:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:11:31.095 14:30:39 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:11:31.095 14:30:39 -- common/autotest_common.sh@10 -- # set +x 00:11:31.095 14:30:39 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:11:31.095 14:30:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:31.095 14:30:39 -- common/autotest_common.sh@10 -- # set +x 00:11:31.095 14:30:39 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:32.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:32.052 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.052 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.052 14:30:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:11:32.052 14:30:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:32.052 14:30:40 -- common/autotest_common.sh@10 -- # set +x 00:11:32.052 14:30:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:11:32.052 14:30:40 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:11:32.052 14:30:40 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:11:32.052 14:30:40 -- common/autotest_common.sh@1563 -- # bdfs=() 00:11:32.052 14:30:40 -- common/autotest_common.sh@1563 -- # local bdfs 00:11:32.052 14:30:40 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:11:32.052 14:30:40 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:32.052 14:30:40 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:32.052 14:30:40 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:32.052 14:30:40 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:32.052 14:30:40 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:32.311 14:30:40 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:11:32.311 14:30:40 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:32.311 14:30:40 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:11:32.311 14:30:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:32.311 14:30:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:32.311 14:30:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:32.311 14:30:40 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:11:32.311 14:30:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:32.311 14:30:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:32.311 14:30:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:32.311 14:30:40 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:11:32.311 14:30:40 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:11:32.311 14:30:40 -- common/autotest_common.sh@1579 -- # return 0 00:11:32.311 14:30:40 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:11:32.311 14:30:40 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:11:32.311 14:30:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:32.311 14:30:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:32.311 14:30:40 -- spdk/autotest.sh@162 -- # timing_enter lib 00:11:32.311 14:30:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:32.311 14:30:40 -- common/autotest_common.sh@10 -- # set +x 00:11:32.311 14:30:40 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:32.311 14:30:40 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:32.311 14:30:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.311 14:30:40 -- common/autotest_common.sh@10 -- # set +x 00:11:32.311 ************************************ 00:11:32.311 START TEST env 00:11:32.311 ************************************ 00:11:32.311 14:30:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:32.311 * Looking for test storage... 00:11:32.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:32.311 14:30:40 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:32.311 14:30:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:32.311 14:30:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.311 14:30:40 -- common/autotest_common.sh@10 -- # set +x 00:11:32.569 ************************************ 00:11:32.569 START TEST env_memory 00:11:32.569 ************************************ 00:11:32.569 14:30:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:32.569 00:11:32.569 00:11:32.569 CUnit - A unit testing framework for C - Version 2.1-3 00:11:32.569 http://cunit.sourceforge.net/ 00:11:32.569 00:11:32.569 00:11:32.569 Suite: memory 00:11:32.569 Test: alloc and free memory map ...[2024-04-17 14:30:40.955255] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:32.569 passed 00:11:32.569 Test: mem map translation ...[2024-04-17 14:30:40.986522] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:32.569 [2024-04-17 14:30:40.986734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:32.569 [2024-04-17 14:30:40.987047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:32.569 [2024-04-17 14:30:40.987225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:32.569 passed 00:11:32.569 Test: mem map registration ...[2024-04-17 14:30:41.051822] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:11:32.569 [2024-04-17 14:30:41.051894] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:11:32.569 passed 00:11:32.569 Test: mem map adjacent registrations ...passed 00:11:32.569 00:11:32.569 Run Summary: Type Total Ran Passed Failed Inactive 00:11:32.569 suites 1 1 n/a 0 0 00:11:32.569 tests 4 4 4 0 0 00:11:32.569 asserts 152 152 152 0 n/a 00:11:32.569 00:11:32.569 Elapsed time = 0.214 seconds 00:11:32.569 00:11:32.569 real 0m0.225s 00:11:32.569 user 0m0.213s 00:11:32.569 sys 0m0.009s 00:11:32.569 14:30:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:32.569 ************************************ 00:11:32.569 END TEST env_memory 00:11:32.569 ************************************ 00:11:32.569 14:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:32.827 14:30:41 -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:32.827 14:30:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:32.827 14:30:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.827 14:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:32.827 ************************************ 00:11:32.827 START TEST env_vtophys 00:11:32.828 ************************************ 00:11:32.828 14:30:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:32.828 EAL: lib.eal log level changed from notice to debug 00:11:32.828 EAL: Detected lcore 0 as core 0 on socket 0 00:11:32.828 EAL: Detected lcore 1 as core 0 on socket 0 00:11:32.828 EAL: Detected lcore 2 as core 0 on socket 0 00:11:32.828 EAL: Detected lcore 3 as core 0 on socket 0 00:11:32.828 EAL: Detected lcore 4 as core 0 on socket 0 00:11:32.828 EAL: Detected lcore 5 as core 0 on socket 0 00:11:32.828 EAL: Detected lcore 6 as core 0 on socket 0 00:11:32.828 EAL: Detected lcore 7 as core 0 on socket 0 00:11:32.828 EAL: Detected lcore 8 as core 0 on socket 0 00:11:32.828 EAL: Detected lcore 9 as core 0 on socket 0 00:11:32.828 EAL: Maximum logical cores by configuration: 128 00:11:32.828 EAL: Detected CPU lcores: 10 00:11:32.828 EAL: Detected NUMA nodes: 1 00:11:32.828 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:11:32.828 EAL: Detected shared linkage of DPDK 00:11:32.828 EAL: No shared files mode enabled, IPC will be disabled 00:11:32.828 EAL: Selected IOVA mode 'PA' 00:11:32.828 EAL: Probing VFIO support... 00:11:32.828 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:32.828 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:32.828 EAL: Ask a virtual area of 0x2e000 bytes 00:11:32.828 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:32.828 EAL: Setting up physically contiguous memory... 
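Note: the VFIO probe above only looks for the vfio kernel module under /sys/module; it is absent on this VM, so EAL skips VFIO support and runs in IOVA mode 'PA' on top of the uio_pci_generic binding that setup.sh performed earlier in this log. A minimal, illustrative reproduction of that check (EAL's real probing also covers vfio_pci, as seen a little further down):

  # Mirrors the /sys/module lookups EAL logs above; illustrative only.
  if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
      echo "vfio loaded: VFIO/IOMMU path (IOVA mode VA) is possible"
  else
      echo "vfio not loaded: uio_pci_generic binding with IOVA mode PA"
  fi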
00:11:32.828 EAL: Setting maximum number of open files to 524288 00:11:32.828 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:32.828 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:32.828 EAL: Ask a virtual area of 0x61000 bytes 00:11:32.828 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:32.828 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:32.828 EAL: Ask a virtual area of 0x400000000 bytes 00:11:32.828 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:32.828 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:32.828 EAL: Ask a virtual area of 0x61000 bytes 00:11:32.828 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:32.828 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:32.828 EAL: Ask a virtual area of 0x400000000 bytes 00:11:32.828 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:32.828 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:32.828 EAL: Ask a virtual area of 0x61000 bytes 00:11:32.828 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:32.828 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:32.828 EAL: Ask a virtual area of 0x400000000 bytes 00:11:32.828 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:32.828 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:32.828 EAL: Ask a virtual area of 0x61000 bytes 00:11:32.828 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:32.828 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:32.828 EAL: Ask a virtual area of 0x400000000 bytes 00:11:32.828 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:32.828 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:32.828 EAL: Hugepages will be freed exactly as allocated. 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: TSC frequency is ~2200000 KHz 00:11:32.828 EAL: Main lcore 0 is ready (tid=7f1297c5ea00;cpuset=[0]) 00:11:32.828 EAL: Trying to obtain current memory policy. 00:11:32.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:32.828 EAL: Restoring previous memory policy: 0 00:11:32.828 EAL: request: mp_malloc_sync 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: Heap on socket 0 was expanded by 2MB 00:11:32.828 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:32.828 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:32.828 EAL: Mem event callback 'spdk:(nil)' registered 00:11:32.828 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:32.828 00:11:32.828 00:11:32.828 CUnit - A unit testing framework for C - Version 2.1-3 00:11:32.828 http://cunit.sourceforge.net/ 00:11:32.828 00:11:32.828 00:11:32.828 Suite: components_suite 00:11:32.828 Test: vtophys_malloc_test ...passed 00:11:32.828 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
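The virtual-area sizes reserved above follow directly from the memseg-list geometry: each of the 4 lists holds 8192 segments of 2 MiB hugepages, so EAL asks for 0x400000000 bytes (16 GiB) of address space per list. This is reserved, not allocated, virtual memory. A quick, illustrative sanity check of that arithmetic:

  # 8192 segments * 2 MiB hugepage size = 0x400000000 (16 GiB) of VA per memseg list
  printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # -> 0x400000000
  # 4 lists on socket 0 -> 64 GiB of reserved virtual address space in total
  echo "$(( 4 * 16 )) GiB"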
00:11:32.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:32.828 EAL: Restoring previous memory policy: 4 00:11:32.828 EAL: Calling mem event callback 'spdk:(nil)' 00:11:32.828 EAL: request: mp_malloc_sync 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: Heap on socket 0 was expanded by 4MB 00:11:32.828 EAL: Calling mem event callback 'spdk:(nil)' 00:11:32.828 EAL: request: mp_malloc_sync 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: Heap on socket 0 was shrunk by 4MB 00:11:32.828 EAL: Trying to obtain current memory policy. 00:11:32.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:32.828 EAL: Restoring previous memory policy: 4 00:11:32.828 EAL: Calling mem event callback 'spdk:(nil)' 00:11:32.828 EAL: request: mp_malloc_sync 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: Heap on socket 0 was expanded by 6MB 00:11:32.828 EAL: Calling mem event callback 'spdk:(nil)' 00:11:32.828 EAL: request: mp_malloc_sync 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: Heap on socket 0 was shrunk by 6MB 00:11:32.828 EAL: Trying to obtain current memory policy. 00:11:32.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:32.828 EAL: Restoring previous memory policy: 4 00:11:32.828 EAL: Calling mem event callback 'spdk:(nil)' 00:11:32.828 EAL: request: mp_malloc_sync 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: Heap on socket 0 was expanded by 10MB 00:11:32.828 EAL: Calling mem event callback 'spdk:(nil)' 00:11:32.828 EAL: request: mp_malloc_sync 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: Heap on socket 0 was shrunk by 10MB 00:11:32.828 EAL: Trying to obtain current memory policy. 00:11:32.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:32.828 EAL: Restoring previous memory policy: 4 00:11:32.828 EAL: Calling mem event callback 'spdk:(nil)' 00:11:32.828 EAL: request: mp_malloc_sync 00:11:32.828 EAL: No shared files mode enabled, IPC is disabled 00:11:32.828 EAL: Heap on socket 0 was expanded by 18MB 00:11:33.086 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was shrunk by 18MB 00:11:33.087 EAL: Trying to obtain current memory policy. 00:11:33.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:33.087 EAL: Restoring previous memory policy: 4 00:11:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was expanded by 34MB 00:11:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was shrunk by 34MB 00:11:33.087 EAL: Trying to obtain current memory policy. 
00:11:33.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:33.087 EAL: Restoring previous memory policy: 4 00:11:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was expanded by 66MB 00:11:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was shrunk by 66MB 00:11:33.087 EAL: Trying to obtain current memory policy. 00:11:33.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:33.087 EAL: Restoring previous memory policy: 4 00:11:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was expanded by 130MB 00:11:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was shrunk by 130MB 00:11:33.087 EAL: Trying to obtain current memory policy. 00:11:33.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:33.087 EAL: Restoring previous memory policy: 4 00:11:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was expanded by 258MB 00:11:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was shrunk by 258MB 00:11:33.087 EAL: Trying to obtain current memory policy. 00:11:33.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:33.087 EAL: Restoring previous memory policy: 4 00:11:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.087 EAL: request: mp_malloc_sync 00:11:33.087 EAL: No shared files mode enabled, IPC is disabled 00:11:33.087 EAL: Heap on socket 0 was expanded by 514MB 00:11:33.345 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.345 EAL: request: mp_malloc_sync 00:11:33.345 EAL: No shared files mode enabled, IPC is disabled 00:11:33.345 EAL: Heap on socket 0 was shrunk by 514MB 00:11:33.345 EAL: Trying to obtain current memory policy. 
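The expand/shrink sizes in this suite (4 MB, 6 MB, 10 MB, ... 258 MB above, then 514 MB and 1026 MB below) are not arbitrary: every size observed in the trace matches 2^i + 2 MB, so the test walks the heap through progressively larger hugepage-backed regions. An illustrative one-liner reproducing the progression (the formula is inferred from the logged sizes, not taken from the test source):

  # Sizes observed in the trace match (1 << i) + 2 MB for i = 1..10
  for i in $(seq 1 10); do printf '%d MB\n' $(( (1 << i) + 2 )); done
  # -> 4 6 10 18 34 66 130 258 514 1026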
00:11:33.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:33.345 EAL: Restoring previous memory policy: 4 00:11:33.345 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.345 EAL: request: mp_malloc_sync 00:11:33.345 EAL: No shared files mode enabled, IPC is disabled 00:11:33.345 EAL: Heap on socket 0 was expanded by 1026MB 00:11:33.603 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.603 passed 00:11:33.603 00:11:33.603 Run Summary: Type Total Ran Passed Failed Inactive 00:11:33.603 suites 1 1 n/a 0 0 00:11:33.603 tests 2 2 2 0 0 00:11:33.603 asserts 5183 5183 5183 0 n/a 00:11:33.603 00:11:33.603 Elapsed time = 0.725 seconds 00:11:33.603 EAL: request: mp_malloc_sync 00:11:33.603 EAL: No shared files mode enabled, IPC is disabled 00:11:33.603 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:33.603 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.603 EAL: request: mp_malloc_sync 00:11:33.603 EAL: No shared files mode enabled, IPC is disabled 00:11:33.603 EAL: Heap on socket 0 was shrunk by 2MB 00:11:33.603 EAL: No shared files mode enabled, IPC is disabled 00:11:33.603 EAL: No shared files mode enabled, IPC is disabled 00:11:33.603 EAL: No shared files mode enabled, IPC is disabled 00:11:33.603 ************************************ 00:11:33.603 END TEST env_vtophys 00:11:33.603 ************************************ 00:11:33.603 00:11:33.603 real 0m0.917s 00:11:33.603 user 0m0.477s 00:11:33.603 sys 0m0.308s 00:11:33.603 14:30:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:33.603 14:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:33.862 14:30:42 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:33.862 14:30:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:33.862 14:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.862 14:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:33.862 ************************************ 00:11:33.862 START TEST env_pci 00:11:33.862 ************************************ 00:11:33.862 14:30:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:33.862 00:11:33.862 00:11:33.862 CUnit - A unit testing framework for C - Version 2.1-3 00:11:33.862 http://cunit.sourceforge.net/ 00:11:33.862 00:11:33.862 00:11:33.862 Suite: pci 00:11:33.862 Test: pci_hook ...[2024-04-17 14:30:42.288399] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58311 has claimed it 00:11:33.862 passed 00:11:33.862 00:11:33.862 Run Summary: Type Total Ran Passed Failed Inactive 00:11:33.862 suites 1 1 n/a 0 0 00:11:33.862 tests 1 1 1 0 0 00:11:33.862 asserts 25 25 25 0 n/a 00:11:33.862 00:11:33.862 Elapsed time = 0.003 seconds 00:11:33.862 EAL: Cannot find device (10000:00:01.0) 00:11:33.862 EAL: Failed to attach device on primary process 00:11:33.862 00:11:33.862 real 0m0.019s 00:11:33.862 user 0m0.007s 00:11:33.862 sys 0m0.012s 00:11:33.862 14:30:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:33.862 ************************************ 00:11:33.862 END TEST env_pci 00:11:33.862 ************************************ 00:11:33.862 14:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:33.862 14:30:42 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:33.862 14:30:42 -- env/env.sh@15 -- # uname 00:11:33.862 14:30:42 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:33.862 14:30:42 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:11:33.862 14:30:42 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:33.862 14:30:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:33.862 14:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.862 14:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:33.862 ************************************ 00:11:33.862 START TEST env_dpdk_post_init 00:11:33.862 ************************************ 00:11:33.862 14:30:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:33.862 EAL: Detected CPU lcores: 10 00:11:33.862 EAL: Detected NUMA nodes: 1 00:11:33.862 EAL: Detected shared linkage of DPDK 00:11:33.862 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:33.862 EAL: Selected IOVA mode 'PA' 00:11:34.123 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:34.123 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:34.123 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:34.123 Starting DPDK initialization... 00:11:34.123 Starting SPDK post initialization... 00:11:34.123 SPDK NVMe probe 00:11:34.123 Attaching to 0000:00:10.0 00:11:34.123 Attaching to 0000:00:11.0 00:11:34.123 Attached to 0000:00:10.0 00:11:34.123 Attached to 0000:00:11.0 00:11:34.123 Cleaning up... 00:11:34.123 00:11:34.123 real 0m0.169s 00:11:34.123 user 0m0.039s 00:11:34.123 sys 0m0.030s 00:11:34.123 14:30:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:34.123 ************************************ 00:11:34.123 END TEST env_dpdk_post_init 00:11:34.123 14:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:34.123 ************************************ 00:11:34.123 14:30:42 -- env/env.sh@26 -- # uname 00:11:34.123 14:30:42 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:34.123 14:30:42 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:34.123 14:30:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:34.123 14:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.123 14:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:34.123 ************************************ 00:11:34.123 START TEST env_mem_callbacks 00:11:34.123 ************************************ 00:11:34.123 14:30:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:34.123 EAL: Detected CPU lcores: 10 00:11:34.123 EAL: Detected NUMA nodes: 1 00:11:34.123 EAL: Detected shared linkage of DPDK 00:11:34.123 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:34.123 EAL: Selected IOVA mode 'PA' 00:11:34.382 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:34.382 00:11:34.382 00:11:34.382 CUnit - A unit testing framework for C - Version 2.1-3 00:11:34.382 http://cunit.sourceforge.net/ 00:11:34.382 00:11:34.382 00:11:34.382 Suite: memory 00:11:34.382 Test: test ... 
00:11:34.382 register 0x200000200000 2097152 00:11:34.382 malloc 3145728 00:11:34.382 register 0x200000400000 4194304 00:11:34.382 buf 0x200000500000 len 3145728 PASSED 00:11:34.382 malloc 64 00:11:34.382 buf 0x2000004fff40 len 64 PASSED 00:11:34.382 malloc 4194304 00:11:34.382 register 0x200000800000 6291456 00:11:34.382 buf 0x200000a00000 len 4194304 PASSED 00:11:34.382 free 0x200000500000 3145728 00:11:34.382 free 0x2000004fff40 64 00:11:34.382 unregister 0x200000400000 4194304 PASSED 00:11:34.382 free 0x200000a00000 4194304 00:11:34.382 unregister 0x200000800000 6291456 PASSED 00:11:34.382 malloc 8388608 00:11:34.382 register 0x200000400000 10485760 00:11:34.382 buf 0x200000600000 len 8388608 PASSED 00:11:34.382 free 0x200000600000 8388608 00:11:34.382 unregister 0x200000400000 10485760 PASSED 00:11:34.382 passed 00:11:34.382 00:11:34.382 Run Summary: Type Total Ran Passed Failed Inactive 00:11:34.382 suites 1 1 n/a 0 0 00:11:34.382 tests 1 1 1 0 0 00:11:34.382 asserts 15 15 15 0 n/a 00:11:34.382 00:11:34.382 Elapsed time = 0.005 seconds 00:11:34.382 00:11:34.382 real 0m0.141s 00:11:34.382 user 0m0.016s 00:11:34.382 sys 0m0.024s 00:11:34.382 14:30:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:34.382 14:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:34.382 ************************************ 00:11:34.382 END TEST env_mem_callbacks 00:11:34.382 ************************************ 00:11:34.382 ************************************ 00:11:34.382 END TEST env 00:11:34.382 ************************************ 00:11:34.382 00:11:34.382 real 0m2.093s 00:11:34.382 user 0m0.970s 00:11:34.382 sys 0m0.722s 00:11:34.382 14:30:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:34.382 14:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:34.382 14:30:42 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:34.382 14:30:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:34.382 14:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.382 14:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:34.382 ************************************ 00:11:34.382 START TEST rpc 00:11:34.382 ************************************ 00:11:34.382 14:30:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:34.640 * Looking for test storage... 00:11:34.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:34.640 14:30:43 -- rpc/rpc.sh@65 -- # spdk_pid=58440 00:11:34.640 14:30:43 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:34.640 14:30:43 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:34.640 14:30:43 -- rpc/rpc.sh@67 -- # waitforlisten 58440 00:11:34.640 14:30:43 -- common/autotest_common.sh@817 -- # '[' -z 58440 ']' 00:11:34.640 14:30:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.640 14:30:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:34.640 14:30:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
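At this point rpc.sh has launched spdk_tgt with the bdev tracepoint group enabled and is waiting for its RPC socket before driving the rpc_integrity cases below. A condensed, illustrative sketch of that start-and-exercise flow, reusing the RPC method names seen in the trace (the polling loop is a simplified stand-in for the harness's waitforlisten helper, and the paths assume the repo layout shown above):

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" -e bdev &
  pid=$!
  # Poll the default RPC socket until the target answers (simplified waitforlisten).
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.2; done
  # Same calls rpc_integrity issues below: create a malloc bdev, layer a passthru on it, inspect, tear down.
  "$spdk/scripts/rpc.py" bdev_malloc_create 8 512                     # -> Malloc0
  "$spdk/scripts/rpc.py" bdev_passthru_create -b Malloc0 -p Passthru0
  "$spdk/scripts/rpc.py" bdev_get_bdevs | jq length                   # -> 2
  "$spdk/scripts/rpc.py" bdev_passthru_delete Passthru0
  "$spdk/scripts/rpc.py" bdev_malloc_delete Malloc0
  kill $pid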
00:11:34.640 14:30:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:34.640 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:34.640 [2024-04-17 14:30:43.089246] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:11:34.640 [2024-04-17 14:30:43.089560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58440 ] 00:11:34.640 [2024-04-17 14:30:43.224435] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.910 [2024-04-17 14:30:43.290926] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:34.910 [2024-04-17 14:30:43.290997] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58440' to capture a snapshot of events at runtime. 00:11:34.910 [2024-04-17 14:30:43.291010] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.910 [2024-04-17 14:30:43.291018] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.910 [2024-04-17 14:30:43.291026] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58440 for offline analysis/debug. 00:11:34.910 [2024-04-17 14:30:43.291052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.910 14:30:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:34.910 14:30:43 -- common/autotest_common.sh@850 -- # return 0 00:11:34.910 14:30:43 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:34.910 14:30:43 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:34.910 14:30:43 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:34.910 14:30:43 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:34.910 14:30:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:34.910 14:30:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.910 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.169 ************************************ 00:11:35.169 START TEST rpc_integrity 00:11:35.169 ************************************ 00:11:35.169 14:30:43 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:11:35.169 14:30:43 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.169 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.169 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.169 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.169 14:30:43 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:35.169 14:30:43 -- rpc/rpc.sh@13 -- # jq length 00:11:35.169 14:30:43 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:35.169 14:30:43 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:35.169 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.169 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.169 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.169 14:30:43 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:35.169 14:30:43 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
00:11:35.169 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.169 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.169 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.169 14:30:43 -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:35.169 { 00:11:35.169 "name": "Malloc0", 00:11:35.169 "aliases": [ 00:11:35.169 "7418bfd0-a39c-4a92-ba68-8ee30fa4617f" 00:11:35.169 ], 00:11:35.169 "product_name": "Malloc disk", 00:11:35.169 "block_size": 512, 00:11:35.169 "num_blocks": 16384, 00:11:35.169 "uuid": "7418bfd0-a39c-4a92-ba68-8ee30fa4617f", 00:11:35.169 "assigned_rate_limits": { 00:11:35.169 "rw_ios_per_sec": 0, 00:11:35.169 "rw_mbytes_per_sec": 0, 00:11:35.169 "r_mbytes_per_sec": 0, 00:11:35.169 "w_mbytes_per_sec": 0 00:11:35.169 }, 00:11:35.169 "claimed": false, 00:11:35.169 "zoned": false, 00:11:35.169 "supported_io_types": { 00:11:35.169 "read": true, 00:11:35.169 "write": true, 00:11:35.169 "unmap": true, 00:11:35.169 "write_zeroes": true, 00:11:35.169 "flush": true, 00:11:35.169 "reset": true, 00:11:35.169 "compare": false, 00:11:35.170 "compare_and_write": false, 00:11:35.170 "abort": true, 00:11:35.170 "nvme_admin": false, 00:11:35.170 "nvme_io": false 00:11:35.170 }, 00:11:35.170 "memory_domains": [ 00:11:35.170 { 00:11:35.170 "dma_device_id": "system", 00:11:35.170 "dma_device_type": 1 00:11:35.170 }, 00:11:35.170 { 00:11:35.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.170 "dma_device_type": 2 00:11:35.170 } 00:11:35.170 ], 00:11:35.170 "driver_specific": {} 00:11:35.170 } 00:11:35.170 ]' 00:11:35.170 14:30:43 -- rpc/rpc.sh@17 -- # jq length 00:11:35.170 14:30:43 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:35.170 14:30:43 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:35.170 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.170 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.170 [2024-04-17 14:30:43.683235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:35.170 [2024-04-17 14:30:43.683295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.170 [2024-04-17 14:30:43.683317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e70b10 00:11:35.170 [2024-04-17 14:30:43.683326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.170 [2024-04-17 14:30:43.684856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.170 [2024-04-17 14:30:43.684896] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:35.170 Passthru0 00:11:35.170 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.170 14:30:43 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:35.170 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.170 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.170 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.170 14:30:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:35.170 { 00:11:35.170 "name": "Malloc0", 00:11:35.170 "aliases": [ 00:11:35.170 "7418bfd0-a39c-4a92-ba68-8ee30fa4617f" 00:11:35.170 ], 00:11:35.170 "product_name": "Malloc disk", 00:11:35.170 "block_size": 512, 00:11:35.170 "num_blocks": 16384, 00:11:35.170 "uuid": "7418bfd0-a39c-4a92-ba68-8ee30fa4617f", 00:11:35.170 "assigned_rate_limits": { 00:11:35.170 "rw_ios_per_sec": 0, 00:11:35.170 "rw_mbytes_per_sec": 0, 00:11:35.170 "r_mbytes_per_sec": 0, 00:11:35.170 
"w_mbytes_per_sec": 0 00:11:35.170 }, 00:11:35.170 "claimed": true, 00:11:35.170 "claim_type": "exclusive_write", 00:11:35.170 "zoned": false, 00:11:35.170 "supported_io_types": { 00:11:35.170 "read": true, 00:11:35.170 "write": true, 00:11:35.170 "unmap": true, 00:11:35.170 "write_zeroes": true, 00:11:35.170 "flush": true, 00:11:35.170 "reset": true, 00:11:35.170 "compare": false, 00:11:35.170 "compare_and_write": false, 00:11:35.170 "abort": true, 00:11:35.170 "nvme_admin": false, 00:11:35.170 "nvme_io": false 00:11:35.170 }, 00:11:35.170 "memory_domains": [ 00:11:35.170 { 00:11:35.170 "dma_device_id": "system", 00:11:35.170 "dma_device_type": 1 00:11:35.170 }, 00:11:35.170 { 00:11:35.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.170 "dma_device_type": 2 00:11:35.170 } 00:11:35.170 ], 00:11:35.170 "driver_specific": {} 00:11:35.170 }, 00:11:35.170 { 00:11:35.170 "name": "Passthru0", 00:11:35.170 "aliases": [ 00:11:35.170 "00607c53-7c8b-5924-85fc-f89da9bf819d" 00:11:35.170 ], 00:11:35.170 "product_name": "passthru", 00:11:35.170 "block_size": 512, 00:11:35.170 "num_blocks": 16384, 00:11:35.170 "uuid": "00607c53-7c8b-5924-85fc-f89da9bf819d", 00:11:35.170 "assigned_rate_limits": { 00:11:35.170 "rw_ios_per_sec": 0, 00:11:35.170 "rw_mbytes_per_sec": 0, 00:11:35.170 "r_mbytes_per_sec": 0, 00:11:35.170 "w_mbytes_per_sec": 0 00:11:35.170 }, 00:11:35.170 "claimed": false, 00:11:35.170 "zoned": false, 00:11:35.170 "supported_io_types": { 00:11:35.170 "read": true, 00:11:35.170 "write": true, 00:11:35.170 "unmap": true, 00:11:35.170 "write_zeroes": true, 00:11:35.170 "flush": true, 00:11:35.170 "reset": true, 00:11:35.170 "compare": false, 00:11:35.170 "compare_and_write": false, 00:11:35.170 "abort": true, 00:11:35.170 "nvme_admin": false, 00:11:35.170 "nvme_io": false 00:11:35.170 }, 00:11:35.170 "memory_domains": [ 00:11:35.170 { 00:11:35.170 "dma_device_id": "system", 00:11:35.170 "dma_device_type": 1 00:11:35.170 }, 00:11:35.170 { 00:11:35.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.170 "dma_device_type": 2 00:11:35.170 } 00:11:35.170 ], 00:11:35.170 "driver_specific": { 00:11:35.170 "passthru": { 00:11:35.170 "name": "Passthru0", 00:11:35.170 "base_bdev_name": "Malloc0" 00:11:35.170 } 00:11:35.170 } 00:11:35.170 } 00:11:35.170 ]' 00:11:35.170 14:30:43 -- rpc/rpc.sh@21 -- # jq length 00:11:35.429 14:30:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:35.429 14:30:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:35.429 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.429 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.429 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.429 14:30:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:35.429 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.429 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.429 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.429 14:30:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:35.429 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.429 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.429 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.429 14:30:43 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:35.429 14:30:43 -- rpc/rpc.sh@26 -- # jq length 00:11:35.429 ************************************ 00:11:35.429 END TEST rpc_integrity 00:11:35.429 ************************************ 00:11:35.429 14:30:43 -- rpc/rpc.sh@26 
-- # '[' 0 == 0 ']' 00:11:35.429 00:11:35.429 real 0m0.337s 00:11:35.429 user 0m0.240s 00:11:35.429 sys 0m0.033s 00:11:35.429 14:30:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:35.429 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.429 14:30:43 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:35.429 14:30:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:35.429 14:30:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.429 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.429 ************************************ 00:11:35.429 START TEST rpc_plugins 00:11:35.429 ************************************ 00:11:35.429 14:30:43 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:11:35.429 14:30:43 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:35.429 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.429 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.429 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.429 14:30:43 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:35.429 14:30:43 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:35.429 14:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.429 14:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.429 14:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.429 14:30:43 -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:35.429 { 00:11:35.429 "name": "Malloc1", 00:11:35.429 "aliases": [ 00:11:35.429 "67ae5a5b-8022-4e17-96f4-04d2b8955418" 00:11:35.429 ], 00:11:35.429 "product_name": "Malloc disk", 00:11:35.429 "block_size": 4096, 00:11:35.429 "num_blocks": 256, 00:11:35.429 "uuid": "67ae5a5b-8022-4e17-96f4-04d2b8955418", 00:11:35.429 "assigned_rate_limits": { 00:11:35.429 "rw_ios_per_sec": 0, 00:11:35.429 "rw_mbytes_per_sec": 0, 00:11:35.429 "r_mbytes_per_sec": 0, 00:11:35.429 "w_mbytes_per_sec": 0 00:11:35.429 }, 00:11:35.429 "claimed": false, 00:11:35.429 "zoned": false, 00:11:35.429 "supported_io_types": { 00:11:35.429 "read": true, 00:11:35.429 "write": true, 00:11:35.429 "unmap": true, 00:11:35.429 "write_zeroes": true, 00:11:35.429 "flush": true, 00:11:35.429 "reset": true, 00:11:35.429 "compare": false, 00:11:35.429 "compare_and_write": false, 00:11:35.429 "abort": true, 00:11:35.429 "nvme_admin": false, 00:11:35.429 "nvme_io": false 00:11:35.429 }, 00:11:35.429 "memory_domains": [ 00:11:35.429 { 00:11:35.429 "dma_device_id": "system", 00:11:35.429 "dma_device_type": 1 00:11:35.429 }, 00:11:35.429 { 00:11:35.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.429 "dma_device_type": 2 00:11:35.429 } 00:11:35.429 ], 00:11:35.429 "driver_specific": {} 00:11:35.429 } 00:11:35.429 ]' 00:11:35.429 14:30:43 -- rpc/rpc.sh@32 -- # jq length 00:11:35.688 14:30:44 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:35.688 14:30:44 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:35.688 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.688 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.688 14:30:44 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:35.688 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.688 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.688 14:30:44 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:35.688 14:30:44 -- rpc/rpc.sh@36 -- # jq 
length 00:11:35.688 ************************************ 00:11:35.688 END TEST rpc_plugins 00:11:35.688 ************************************ 00:11:35.688 14:30:44 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:35.688 00:11:35.688 real 0m0.166s 00:11:35.688 user 0m0.118s 00:11:35.688 sys 0m0.010s 00:11:35.688 14:30:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:35.688 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 14:30:44 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:35.688 14:30:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:35.688 14:30:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.688 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 ************************************ 00:11:35.688 START TEST rpc_trace_cmd_test 00:11:35.688 ************************************ 00:11:35.688 14:30:44 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:11:35.688 14:30:44 -- rpc/rpc.sh@40 -- # local info 00:11:35.688 14:30:44 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:35.688 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.688 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:35.688 14:30:44 -- rpc/rpc.sh@42 -- # info='{ 00:11:35.688 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58440", 00:11:35.688 "tpoint_group_mask": "0x8", 00:11:35.688 "iscsi_conn": { 00:11:35.688 "mask": "0x2", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "scsi": { 00:11:35.688 "mask": "0x4", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "bdev": { 00:11:35.688 "mask": "0x8", 00:11:35.688 "tpoint_mask": "0xffffffffffffffff" 00:11:35.688 }, 00:11:35.688 "nvmf_rdma": { 00:11:35.688 "mask": "0x10", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "nvmf_tcp": { 00:11:35.688 "mask": "0x20", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "ftl": { 00:11:35.688 "mask": "0x40", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "blobfs": { 00:11:35.688 "mask": "0x80", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "dsa": { 00:11:35.688 "mask": "0x200", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "thread": { 00:11:35.688 "mask": "0x400", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "nvme_pcie": { 00:11:35.688 "mask": "0x800", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "iaa": { 00:11:35.688 "mask": "0x1000", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "nvme_tcp": { 00:11:35.688 "mask": "0x2000", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "bdev_nvme": { 00:11:35.688 "mask": "0x4000", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 }, 00:11:35.688 "sock": { 00:11:35.688 "mask": "0x8000", 00:11:35.688 "tpoint_mask": "0x0" 00:11:35.688 } 00:11:35.688 }' 00:11:35.688 14:30:44 -- rpc/rpc.sh@43 -- # jq length 00:11:35.947 14:30:44 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:11:35.947 14:30:44 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:35.947 14:30:44 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:35.947 14:30:44 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:35.947 14:30:44 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:35.947 14:30:44 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:35.947 14:30:44 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:35.947 14:30:44 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
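Because spdk_tgt was started with '-e bdev', trace_get_info reports the bdev tracepoint group fully enabled (tpoint_mask 0xffffffffffffffff, group mask 0x8) while every other group stays at 0x0, which is what the jq checks around this point assert. The same query can be run by hand against the live target (illustrative, reusing the rpc.py path from the sketch above):

  # Pull the trace state from the running target and extract the relevant masks.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # -> 0x8
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask    # -> 0xffffffffffffffff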
00:11:35.947 ************************************ 00:11:35.947 END TEST rpc_trace_cmd_test 00:11:35.947 ************************************ 00:11:35.947 14:30:44 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:35.947 00:11:35.947 real 0m0.283s 00:11:35.947 user 0m0.245s 00:11:35.947 sys 0m0.028s 00:11:35.947 14:30:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:35.947 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.205 14:30:44 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:36.205 14:30:44 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:36.205 14:30:44 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:36.205 14:30:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:36.205 14:30:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.205 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.205 ************************************ 00:11:36.205 START TEST rpc_daemon_integrity 00:11:36.205 ************************************ 00:11:36.205 14:30:44 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:11:36.205 14:30:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.205 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.205 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.205 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.205 14:30:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:36.205 14:30:44 -- rpc/rpc.sh@13 -- # jq length 00:11:36.205 14:30:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:36.205 14:30:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:36.205 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.205 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.205 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.205 14:30:44 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:36.205 14:30:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:36.205 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.205 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.205 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.205 14:30:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:36.206 { 00:11:36.206 "name": "Malloc2", 00:11:36.206 "aliases": [ 00:11:36.206 "a66301e5-0b62-4433-b74b-70d9d3446dca" 00:11:36.206 ], 00:11:36.206 "product_name": "Malloc disk", 00:11:36.206 "block_size": 512, 00:11:36.206 "num_blocks": 16384, 00:11:36.206 "uuid": "a66301e5-0b62-4433-b74b-70d9d3446dca", 00:11:36.206 "assigned_rate_limits": { 00:11:36.206 "rw_ios_per_sec": 0, 00:11:36.206 "rw_mbytes_per_sec": 0, 00:11:36.206 "r_mbytes_per_sec": 0, 00:11:36.206 "w_mbytes_per_sec": 0 00:11:36.206 }, 00:11:36.206 "claimed": false, 00:11:36.206 "zoned": false, 00:11:36.206 "supported_io_types": { 00:11:36.206 "read": true, 00:11:36.206 "write": true, 00:11:36.206 "unmap": true, 00:11:36.206 "write_zeroes": true, 00:11:36.206 "flush": true, 00:11:36.206 "reset": true, 00:11:36.206 "compare": false, 00:11:36.206 "compare_and_write": false, 00:11:36.206 "abort": true, 00:11:36.206 "nvme_admin": false, 00:11:36.206 "nvme_io": false 00:11:36.206 }, 00:11:36.206 "memory_domains": [ 00:11:36.206 { 00:11:36.206 "dma_device_id": "system", 00:11:36.206 "dma_device_type": 1 00:11:36.206 }, 00:11:36.206 { 00:11:36.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.206 "dma_device_type": 2 00:11:36.206 } 00:11:36.206 ], 00:11:36.206 "driver_specific": {} 00:11:36.206 } 00:11:36.206 ]' 00:11:36.206 14:30:44 -- 
rpc/rpc.sh@17 -- # jq length 00:11:36.206 14:30:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:36.206 14:30:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:36.206 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.206 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.206 [2024-04-17 14:30:44.779601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:36.206 [2024-04-17 14:30:44.779663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.206 [2024-04-17 14:30:44.779687] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec6bc0 00:11:36.206 [2024-04-17 14:30:44.779697] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.206 [2024-04-17 14:30:44.781147] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.206 [2024-04-17 14:30:44.781185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:36.206 Passthru0 00:11:36.206 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.206 14:30:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:36.206 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.206 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.465 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.465 14:30:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:36.465 { 00:11:36.465 "name": "Malloc2", 00:11:36.465 "aliases": [ 00:11:36.465 "a66301e5-0b62-4433-b74b-70d9d3446dca" 00:11:36.465 ], 00:11:36.465 "product_name": "Malloc disk", 00:11:36.465 "block_size": 512, 00:11:36.465 "num_blocks": 16384, 00:11:36.465 "uuid": "a66301e5-0b62-4433-b74b-70d9d3446dca", 00:11:36.465 "assigned_rate_limits": { 00:11:36.465 "rw_ios_per_sec": 0, 00:11:36.465 "rw_mbytes_per_sec": 0, 00:11:36.465 "r_mbytes_per_sec": 0, 00:11:36.465 "w_mbytes_per_sec": 0 00:11:36.465 }, 00:11:36.465 "claimed": true, 00:11:36.465 "claim_type": "exclusive_write", 00:11:36.465 "zoned": false, 00:11:36.465 "supported_io_types": { 00:11:36.465 "read": true, 00:11:36.465 "write": true, 00:11:36.465 "unmap": true, 00:11:36.465 "write_zeroes": true, 00:11:36.465 "flush": true, 00:11:36.465 "reset": true, 00:11:36.465 "compare": false, 00:11:36.465 "compare_and_write": false, 00:11:36.465 "abort": true, 00:11:36.465 "nvme_admin": false, 00:11:36.465 "nvme_io": false 00:11:36.465 }, 00:11:36.465 "memory_domains": [ 00:11:36.465 { 00:11:36.465 "dma_device_id": "system", 00:11:36.465 "dma_device_type": 1 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.465 "dma_device_type": 2 00:11:36.465 } 00:11:36.465 ], 00:11:36.465 "driver_specific": {} 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "name": "Passthru0", 00:11:36.465 "aliases": [ 00:11:36.465 "29b6bb17-0cc4-51cd-80ef-0988ef7e9c5d" 00:11:36.465 ], 00:11:36.465 "product_name": "passthru", 00:11:36.465 "block_size": 512, 00:11:36.465 "num_blocks": 16384, 00:11:36.465 "uuid": "29b6bb17-0cc4-51cd-80ef-0988ef7e9c5d", 00:11:36.465 "assigned_rate_limits": { 00:11:36.465 "rw_ios_per_sec": 0, 00:11:36.465 "rw_mbytes_per_sec": 0, 00:11:36.465 "r_mbytes_per_sec": 0, 00:11:36.465 "w_mbytes_per_sec": 0 00:11:36.465 }, 00:11:36.465 "claimed": false, 00:11:36.465 "zoned": false, 00:11:36.465 "supported_io_types": { 00:11:36.465 "read": true, 00:11:36.465 "write": true, 00:11:36.465 "unmap": true, 00:11:36.465 "write_zeroes": true, 00:11:36.465 "flush": 
true, 00:11:36.465 "reset": true, 00:11:36.465 "compare": false, 00:11:36.465 "compare_and_write": false, 00:11:36.465 "abort": true, 00:11:36.465 "nvme_admin": false, 00:11:36.465 "nvme_io": false 00:11:36.465 }, 00:11:36.465 "memory_domains": [ 00:11:36.465 { 00:11:36.465 "dma_device_id": "system", 00:11:36.465 "dma_device_type": 1 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.465 "dma_device_type": 2 00:11:36.465 } 00:11:36.465 ], 00:11:36.465 "driver_specific": { 00:11:36.465 "passthru": { 00:11:36.465 "name": "Passthru0", 00:11:36.466 "base_bdev_name": "Malloc2" 00:11:36.466 } 00:11:36.466 } 00:11:36.466 } 00:11:36.466 ]' 00:11:36.466 14:30:44 -- rpc/rpc.sh@21 -- # jq length 00:11:36.466 14:30:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:36.466 14:30:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:36.466 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.466 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.466 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.466 14:30:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:36.466 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.466 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.466 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.466 14:30:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:36.466 14:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.466 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.466 14:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.466 14:30:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:36.466 14:30:44 -- rpc/rpc.sh@26 -- # jq length 00:11:36.466 ************************************ 00:11:36.466 END TEST rpc_daemon_integrity 00:11:36.466 ************************************ 00:11:36.466 14:30:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:36.466 00:11:36.466 real 0m0.307s 00:11:36.466 user 0m0.197s 00:11:36.466 sys 0m0.035s 00:11:36.466 14:30:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:36.466 14:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.466 14:30:44 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:36.466 14:30:44 -- rpc/rpc.sh@84 -- # killprocess 58440 00:11:36.466 14:30:44 -- common/autotest_common.sh@936 -- # '[' -z 58440 ']' 00:11:36.466 14:30:44 -- common/autotest_common.sh@940 -- # kill -0 58440 00:11:36.466 14:30:44 -- common/autotest_common.sh@941 -- # uname 00:11:36.466 14:30:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:36.466 14:30:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58440 00:11:36.466 killing process with pid 58440 00:11:36.466 14:30:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:36.466 14:30:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:36.466 14:30:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58440' 00:11:36.466 14:30:44 -- common/autotest_common.sh@955 -- # kill 58440 00:11:36.466 14:30:44 -- common/autotest_common.sh@960 -- # wait 58440 00:11:36.724 00:11:36.724 real 0m2.307s 00:11:36.724 user 0m3.247s 00:11:36.724 sys 0m0.542s 00:11:36.724 14:30:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:36.724 ************************************ 00:11:36.724 END TEST rpc 00:11:36.724 ************************************ 00:11:36.724 14:30:45 -- common/autotest_common.sh@10 -- # set +x 
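Note: the rpc_daemon_integrity run above exercises the malloc and passthru bdev RPCs through the test harness's rpc_cmd wrapper. The same flow can be reproduced by hand against a running spdk_tgt; this is only a minimal sketch using scripts/rpc.py directly, assuming the default /var/tmp/spdk_tgt.sock socket (the names Malloc2/Passthru0 simply mirror what the log happened to print).
  # manual sketch of the integrity flow (socket path and bdev names are illustrative)
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512            # 8 MiB bdev with 512 B blocks, prints the new bdev name
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc2 -p Passthru0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_get_bdevs | jq length          # expect 2 while the passthru claims the malloc bdev
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_delete Passthru0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete Malloc2
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_get_bdevs | jq length          # expect 0 after cleanup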
00:11:36.724 14:30:45 -- spdk/autotest.sh@166 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:36.725 14:30:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:36.725 14:30:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.725 14:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:36.982 ************************************ 00:11:36.982 START TEST rpc_client 00:11:36.982 ************************************ 00:11:36.982 14:30:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:36.982 * Looking for test storage... 00:11:36.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:36.982 14:30:45 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:36.982 OK 00:11:36.982 14:30:45 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:36.982 ************************************ 00:11:36.982 END TEST rpc_client 00:11:36.982 ************************************ 00:11:36.982 00:11:36.982 real 0m0.098s 00:11:36.982 user 0m0.042s 00:11:36.982 sys 0m0.062s 00:11:36.982 14:30:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:36.982 14:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:36.982 14:30:45 -- spdk/autotest.sh@167 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:36.982 14:30:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:36.982 14:30:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.982 14:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:36.982 ************************************ 00:11:36.982 START TEST json_config 00:11:36.982 ************************************ 00:11:36.982 14:30:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:37.257 14:30:45 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:37.257 14:30:45 -- nvmf/common.sh@7 -- # uname -s 00:11:37.257 14:30:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.257 14:30:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.257 14:30:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.257 14:30:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.257 14:30:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.257 14:30:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.257 14:30:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.257 14:30:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.257 14:30:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.257 14:30:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.257 14:30:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:11:37.257 14:30:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:11:37.257 14:30:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.257 14:30:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.257 14:30:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:37.257 14:30:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.257 14:30:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.257 14:30:45 -- scripts/common.sh@502 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:37.257 14:30:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.257 14:30:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.257 14:30:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.257 14:30:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.257 14:30:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.257 14:30:45 -- paths/export.sh@5 -- # export PATH 00:11:37.257 14:30:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.257 14:30:45 -- nvmf/common.sh@47 -- # : 0 00:11:37.257 14:30:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.257 14:30:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.257 14:30:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.257 14:30:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.257 14:30:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.257 14:30:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.257 14:30:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.257 14:30:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.257 14:30:45 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:37.257 14:30:45 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:37.257 14:30:45 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:37.257 14:30:45 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:37.257 14:30:45 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:37.257 14:30:45 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:11:37.257 14:30:45 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:11:37.257 14:30:45 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:11:37.257 14:30:45 -- 
json_config/json_config.sh@32 -- # declare -A app_socket 00:11:37.257 14:30:45 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:11:37.257 14:30:45 -- json_config/json_config.sh@33 -- # declare -A app_params 00:11:37.257 14:30:45 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:11:37.257 14:30:45 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:11:37.257 14:30:45 -- json_config/json_config.sh@40 -- # last_event_id=0 00:11:37.257 14:30:45 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:37.257 14:30:45 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:11:37.257 INFO: JSON configuration test init 00:11:37.257 14:30:45 -- json_config/json_config.sh@357 -- # json_config_test_init 00:11:37.257 14:30:45 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:11:37.257 14:30:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:37.257 14:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:37.257 14:30:45 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:11:37.257 14:30:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:37.257 14:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:37.257 14:30:45 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:11:37.257 14:30:45 -- json_config/common.sh@9 -- # local app=target 00:11:37.257 14:30:45 -- json_config/common.sh@10 -- # shift 00:11:37.257 14:30:45 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:37.257 Waiting for target to run... 00:11:37.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:37.257 14:30:45 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:37.257 14:30:45 -- json_config/common.sh@15 -- # local app_extra_params= 00:11:37.257 14:30:45 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:37.257 14:30:45 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:37.257 14:30:45 -- json_config/common.sh@22 -- # app_pid["$app"]=58694 00:11:37.257 14:30:45 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:37.257 14:30:45 -- json_config/common.sh@25 -- # waitforlisten 58694 /var/tmp/spdk_tgt.sock 00:11:37.257 14:30:45 -- common/autotest_common.sh@817 -- # '[' -z 58694 ']' 00:11:37.257 14:30:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:37.257 14:30:45 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:11:37.257 14:30:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:37.257 14:30:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:37.257 14:30:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:37.257 14:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:37.257 [2024-04-17 14:30:45.721409] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:11:37.257 [2024-04-17 14:30:45.721489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58694 ] 00:11:37.516 [2024-04-17 14:30:46.014387] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.516 [2024-04-17 14:30:46.059781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.450 00:11:38.450 14:30:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:38.450 14:30:46 -- common/autotest_common.sh@850 -- # return 0 00:11:38.450 14:30:46 -- json_config/common.sh@26 -- # echo '' 00:11:38.450 14:30:46 -- json_config/json_config.sh@269 -- # create_accel_config 00:11:38.450 14:30:46 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:11:38.450 14:30:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:38.450 14:30:46 -- common/autotest_common.sh@10 -- # set +x 00:11:38.450 14:30:46 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:11:38.450 14:30:46 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:11:38.450 14:30:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:38.450 14:30:46 -- common/autotest_common.sh@10 -- # set +x 00:11:38.450 14:30:46 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:38.450 14:30:46 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:11:38.450 14:30:46 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:39.016 14:30:47 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:11:39.016 14:30:47 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:11:39.016 14:30:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:39.016 14:30:47 -- common/autotest_common.sh@10 -- # set +x 00:11:39.016 14:30:47 -- json_config/json_config.sh@45 -- # local ret=0 00:11:39.016 14:30:47 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:39.016 14:30:47 -- json_config/json_config.sh@46 -- # local enabled_types 00:11:39.016 14:30:47 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:11:39.016 14:30:47 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:39.016 14:30:47 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:11:39.016 14:30:47 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:11:39.016 14:30:47 -- json_config/json_config.sh@48 -- # local get_types 00:11:39.016 14:30:47 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:11:39.016 14:30:47 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:11:39.016 14:30:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:39.016 14:30:47 -- common/autotest_common.sh@10 -- # set +x 00:11:39.275 14:30:47 -- json_config/json_config.sh@55 -- # return 0 00:11:39.275 14:30:47 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:11:39.275 14:30:47 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:11:39.275 14:30:47 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:11:39.275 14:30:47 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
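Note: the json_config steps above push a complete configuration into the freshly started target in one shot and then confirm the notification types it will emit. A rough standalone equivalent is sketched below, assuming the test's default RPC socket; the generated JSON depends on whatever NVMe devices gen_nvme.sh finds on the host.
  # sketch: generate and load a JSON config over the RPC socket, then check notify types
  ./scripts/gen_nvme.sh --json-with-subsystems | ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types     # expect ["bdev_register", "bdev_unregister"]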
00:11:39.275 14:30:47 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:11:39.275 14:30:47 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:11:39.275 14:30:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:39.275 14:30:47 -- common/autotest_common.sh@10 -- # set +x 00:11:39.275 14:30:47 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:11:39.275 14:30:47 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:11:39.275 14:30:47 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:11:39.275 14:30:47 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:39.275 14:30:47 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:39.533 MallocForNvmf0 00:11:39.533 14:30:47 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:39.533 14:30:47 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:39.791 MallocForNvmf1 00:11:39.791 14:30:48 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:11:39.791 14:30:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:11:40.049 [2024-04-17 14:30:48.556556] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.049 14:30:48 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:40.049 14:30:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:40.308 14:30:48 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:40.308 14:30:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:40.566 14:30:49 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:40.566 14:30:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:40.824 14:30:49 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:40.824 14:30:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:41.082 [2024-04-17 14:30:49.513086] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:41.082 14:30:49 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:11:41.082 14:30:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:41.082 14:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:41.082 14:30:49 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:11:41.082 14:30:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:41.082 14:30:49 -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.082 14:30:49 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:11:41.082 14:30:49 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:41.082 14:30:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:41.340 MallocBdevForConfigChangeCheck 00:11:41.340 14:30:49 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:11:41.340 14:30:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:41.340 14:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:41.340 14:30:49 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:11:41.340 14:30:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:41.908 INFO: shutting down applications... 00:11:41.908 14:30:50 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:11:41.908 14:30:50 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:11:41.908 14:30:50 -- json_config/json_config.sh@368 -- # json_config_clear target 00:11:41.908 14:30:50 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:11:41.908 14:30:50 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:42.166 Calling clear_iscsi_subsystem 00:11:42.166 Calling clear_nvmf_subsystem 00:11:42.166 Calling clear_nbd_subsystem 00:11:42.166 Calling clear_ublk_subsystem 00:11:42.166 Calling clear_vhost_blk_subsystem 00:11:42.166 Calling clear_vhost_scsi_subsystem 00:11:42.166 Calling clear_bdev_subsystem 00:11:42.166 14:30:50 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:42.166 14:30:50 -- json_config/json_config.sh@343 -- # count=100 00:11:42.166 14:30:50 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:11:42.166 14:30:50 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:42.166 14:30:50 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:11:42.166 14:30:50 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:42.732 14:30:51 -- json_config/json_config.sh@345 -- # break 00:11:42.732 14:30:51 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:11:42.732 14:30:51 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:11:42.732 14:30:51 -- json_config/common.sh@31 -- # local app=target 00:11:42.732 14:30:51 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:42.732 14:30:51 -- json_config/common.sh@35 -- # [[ -n 58694 ]] 00:11:42.732 14:30:51 -- json_config/common.sh@38 -- # kill -SIGINT 58694 00:11:42.732 14:30:51 -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:42.732 14:30:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:42.732 14:30:51 -- json_config/common.sh@41 -- # kill -0 58694 00:11:42.732 14:30:51 -- json_config/common.sh@45 -- # sleep 0.5 00:11:42.992 14:30:51 -- json_config/common.sh@40 -- # (( i++ )) 00:11:42.992 14:30:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:42.992 14:30:51 -- json_config/common.sh@41 -- # kill -0 58694 00:11:42.992 14:30:51 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:11:42.992 SPDK target shutdown done 00:11:42.992 INFO: relaunching applications... 00:11:42.992 14:30:51 -- json_config/common.sh@43 -- # break 00:11:42.992 14:30:51 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:42.992 14:30:51 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:42.992 14:30:51 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:11:42.992 14:30:51 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:42.992 14:30:51 -- json_config/common.sh@9 -- # local app=target 00:11:42.992 14:30:51 -- json_config/common.sh@10 -- # shift 00:11:42.992 14:30:51 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:42.992 14:30:51 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:42.992 14:30:51 -- json_config/common.sh@15 -- # local app_extra_params= 00:11:42.992 14:30:51 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:42.992 14:30:51 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:43.250 14:30:51 -- json_config/common.sh@22 -- # app_pid["$app"]=58890 00:11:43.250 14:30:51 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:43.250 Waiting for target to run... 00:11:43.250 14:30:51 -- json_config/common.sh@25 -- # waitforlisten 58890 /var/tmp/spdk_tgt.sock 00:11:43.250 14:30:51 -- common/autotest_common.sh@817 -- # '[' -z 58890 ']' 00:11:43.250 14:30:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:43.250 14:30:51 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:43.250 14:30:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:43.250 14:30:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:43.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:43.250 14:30:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:43.250 14:30:51 -- common/autotest_common.sh@10 -- # set +x 00:11:43.250 [2024-04-17 14:30:51.649628] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:11:43.250 [2024-04-17 14:30:51.650094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58890 ] 00:11:43.508 [2024-04-17 14:30:51.987272] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.508 [2024-04-17 14:30:52.032270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.766 [2024-04-17 14:30:52.330728] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.766 [2024-04-17 14:30:52.362810] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:44.023 00:11:44.023 INFO: Checking if target configuration is the same... 
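Note: the identity check that follows dumps the live configuration with save_config and diffs it, field-sorted, against the JSON file the target was relaunched with. A rough sketch of that comparison is below; the temporary file names are illustrative (json_diff.sh uses mktemp), and it assumes config_filter.py reads the config on stdin as the test pipes it.
  # sketch: compare live config vs. the launch-time JSON (paths illustrative)
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  ./test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live.sorted
  ./test/json_config/config_filter.py -method sort < ./spdk_tgt_config.json > /tmp/file.sorted
  diff -u /tmp/file.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'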
00:11:44.023 14:30:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:44.023 14:30:52 -- common/autotest_common.sh@850 -- # return 0 00:11:44.023 14:30:52 -- json_config/common.sh@26 -- # echo '' 00:11:44.023 14:30:52 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:11:44.023 14:30:52 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:11:44.023 14:30:52 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:44.023 14:30:52 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:11:44.023 14:30:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:44.023 + '[' 2 -ne 2 ']' 00:11:44.023 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:44.023 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:44.023 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:44.023 +++ basename /dev/fd/62 00:11:44.023 ++ mktemp /tmp/62.XXX 00:11:44.023 + tmp_file_1=/tmp/62.bBV 00:11:44.023 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:44.023 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:44.023 + tmp_file_2=/tmp/spdk_tgt_config.json.5SS 00:11:44.023 + ret=0 00:11:44.023 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:44.588 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:44.588 + diff -u /tmp/62.bBV /tmp/spdk_tgt_config.json.5SS 00:11:44.588 INFO: JSON config files are the same 00:11:44.588 + echo 'INFO: JSON config files are the same' 00:11:44.588 + rm /tmp/62.bBV /tmp/spdk_tgt_config.json.5SS 00:11:44.588 + exit 0 00:11:44.588 INFO: changing configuration and checking if this can be detected... 00:11:44.588 14:30:53 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:11:44.588 14:30:53 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:11:44.588 14:30:53 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:44.588 14:30:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:44.885 14:30:53 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:44.885 14:30:53 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:11:44.885 14:30:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:44.885 + '[' 2 -ne 2 ']' 00:11:44.885 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:44.885 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:11:44.885 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:44.885 +++ basename /dev/fd/62 00:11:44.885 ++ mktemp /tmp/62.XXX 00:11:44.885 + tmp_file_1=/tmp/62.UVo 00:11:44.885 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:44.885 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:44.885 + tmp_file_2=/tmp/spdk_tgt_config.json.ukX 00:11:44.885 + ret=0 00:11:44.885 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:45.143 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:45.402 + diff -u /tmp/62.UVo /tmp/spdk_tgt_config.json.ukX 00:11:45.402 + ret=1 00:11:45.402 + echo '=== Start of file: /tmp/62.UVo ===' 00:11:45.402 + cat /tmp/62.UVo 00:11:45.402 + echo '=== End of file: /tmp/62.UVo ===' 00:11:45.402 + echo '' 00:11:45.402 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ukX ===' 00:11:45.402 + cat /tmp/spdk_tgt_config.json.ukX 00:11:45.402 + echo '=== End of file: /tmp/spdk_tgt_config.json.ukX ===' 00:11:45.402 + echo '' 00:11:45.402 + rm /tmp/62.UVo /tmp/spdk_tgt_config.json.ukX 00:11:45.402 + exit 1 00:11:45.402 INFO: configuration change detected. 00:11:45.402 14:30:53 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:11:45.402 14:30:53 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:11:45.402 14:30:53 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:11:45.402 14:30:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:45.402 14:30:53 -- common/autotest_common.sh@10 -- # set +x 00:11:45.402 14:30:53 -- json_config/json_config.sh@307 -- # local ret=0 00:11:45.402 14:30:53 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:11:45.402 14:30:53 -- json_config/json_config.sh@317 -- # [[ -n 58890 ]] 00:11:45.402 14:30:53 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:11:45.402 14:30:53 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:11:45.402 14:30:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:45.402 14:30:53 -- common/autotest_common.sh@10 -- # set +x 00:11:45.402 14:30:53 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:11:45.402 14:30:53 -- json_config/json_config.sh@193 -- # uname -s 00:11:45.402 14:30:53 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:11:45.402 14:30:53 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:11:45.402 14:30:53 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:11:45.402 14:30:53 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:11:45.402 14:30:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:45.402 14:30:53 -- common/autotest_common.sh@10 -- # set +x 00:11:45.402 14:30:53 -- json_config/json_config.sh@323 -- # killprocess 58890 00:11:45.402 14:30:53 -- common/autotest_common.sh@936 -- # '[' -z 58890 ']' 00:11:45.402 14:30:53 -- common/autotest_common.sh@940 -- # kill -0 58890 00:11:45.402 14:30:53 -- common/autotest_common.sh@941 -- # uname 00:11:45.402 14:30:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:45.402 14:30:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58890 00:11:45.402 killing process with pid 58890 00:11:45.402 14:30:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:45.402 14:30:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:45.402 14:30:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58890' 00:11:45.402 
14:30:53 -- common/autotest_common.sh@955 -- # kill 58890 00:11:45.402 14:30:53 -- common/autotest_common.sh@960 -- # wait 58890 00:11:45.660 14:30:54 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:45.660 14:30:54 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:11:45.660 14:30:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:45.660 14:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:45.660 INFO: Success 00:11:45.660 14:30:54 -- json_config/json_config.sh@328 -- # return 0 00:11:45.660 14:30:54 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:11:45.660 00:11:45.660 real 0m8.560s 00:11:45.660 user 0m12.637s 00:11:45.660 sys 0m1.428s 00:11:45.660 ************************************ 00:11:45.660 END TEST json_config 00:11:45.660 ************************************ 00:11:45.660 14:30:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:45.660 14:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:45.660 14:30:54 -- spdk/autotest.sh@168 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:45.660 14:30:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:45.660 14:30:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.660 14:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:45.660 ************************************ 00:11:45.660 START TEST json_config_extra_key 00:11:45.660 ************************************ 00:11:45.660 14:30:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:45.918 14:30:54 -- nvmf/common.sh@7 -- # uname -s 00:11:45.918 14:30:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.918 14:30:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.918 14:30:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.918 14:30:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.918 14:30:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.918 14:30:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.918 14:30:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.918 14:30:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.918 14:30:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.918 14:30:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.918 14:30:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:11:45.918 14:30:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:11:45.918 14:30:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.918 14:30:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.918 14:30:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:45.918 14:30:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.918 14:30:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.918 14:30:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.918 14:30:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.918 14:30:54 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.918 14:30:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.918 14:30:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.918 14:30:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.918 14:30:54 -- paths/export.sh@5 -- # export PATH 00:11:45.918 14:30:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.918 14:30:54 -- nvmf/common.sh@47 -- # : 0 00:11:45.918 14:30:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.918 14:30:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.918 14:30:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.918 14:30:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.918 14:30:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.918 14:30:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.918 14:30:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.918 14:30:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:45.918 INFO: launching applications... 00:11:45.918 14:30:54 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:45.918 14:30:54 -- json_config/common.sh@9 -- # local app=target 00:11:45.918 14:30:54 -- json_config/common.sh@10 -- # shift 00:11:45.918 14:30:54 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:45.918 14:30:54 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:45.918 14:30:54 -- json_config/common.sh@15 -- # local app_extra_params= 00:11:45.918 14:30:54 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:45.918 14:30:54 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:45.918 14:30:54 -- json_config/common.sh@22 -- # app_pid["$app"]=59041 00:11:45.918 14:30:54 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:45.918 14:30:54 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:45.918 Waiting for target to run... 00:11:45.918 14:30:54 -- json_config/common.sh@25 -- # waitforlisten 59041 /var/tmp/spdk_tgt.sock 00:11:45.918 14:30:54 -- common/autotest_common.sh@817 -- # '[' -z 59041 ']' 00:11:45.918 14:30:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:45.918 14:30:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:45.918 14:30:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:45.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:45.918 14:30:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:45.918 14:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:45.918 [2024-04-17 14:30:54.377586] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:11:45.918 [2024-04-17 14:30:54.377989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ] 00:11:46.177 [2024-04-17 14:30:54.676697] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.177 [2024-04-17 14:30:54.720499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.110 00:11:47.110 INFO: shutting down applications... 00:11:47.110 14:30:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:47.110 14:30:55 -- common/autotest_common.sh@850 -- # return 0 00:11:47.110 14:30:55 -- json_config/common.sh@26 -- # echo '' 00:11:47.110 14:30:55 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:11:47.110 14:30:55 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:47.110 14:30:55 -- json_config/common.sh@31 -- # local app=target 00:11:47.110 14:30:55 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:47.110 14:30:55 -- json_config/common.sh@35 -- # [[ -n 59041 ]] 00:11:47.110 14:30:55 -- json_config/common.sh@38 -- # kill -SIGINT 59041 00:11:47.110 14:30:55 -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:47.110 14:30:55 -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:47.110 14:30:55 -- json_config/common.sh@41 -- # kill -0 59041 00:11:47.110 14:30:55 -- json_config/common.sh@45 -- # sleep 0.5 00:11:47.368 14:30:55 -- json_config/common.sh@40 -- # (( i++ )) 00:11:47.368 14:30:55 -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:47.368 14:30:55 -- json_config/common.sh@41 -- # kill -0 59041 00:11:47.368 14:30:55 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:47.368 14:30:55 -- json_config/common.sh@43 -- # break 00:11:47.368 SPDK target shutdown done 00:11:47.368 Success 00:11:47.368 14:30:55 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:47.368 14:30:55 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:47.368 14:30:55 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:47.368 00:11:47.368 real 0m1.714s 00:11:47.368 user 0m1.710s 00:11:47.368 sys 0m0.309s 00:11:47.368 ************************************ 00:11:47.368 END TEST json_config_extra_key 00:11:47.368 ************************************ 00:11:47.368 14:30:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:47.368 14:30:55 -- common/autotest_common.sh@10 -- # set +x 00:11:47.626 14:30:55 -- spdk/autotest.sh@169 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:47.626 14:30:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:47.626 14:30:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:47.626 14:30:55 -- common/autotest_common.sh@10 -- # set +x 00:11:47.626 ************************************ 00:11:47.626 START TEST alias_rpc 00:11:47.626 ************************************ 00:11:47.626 14:30:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:47.626 * Looking for test storage... 00:11:47.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:47.626 14:30:56 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:47.626 14:30:56 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59116 00:11:47.626 14:30:56 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:47.626 14:30:56 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59116 00:11:47.626 14:30:56 -- common/autotest_common.sh@817 -- # '[' -z 59116 ']' 00:11:47.626 14:30:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.626 14:30:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:47.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.626 14:30:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.626 14:30:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:47.626 14:30:56 -- common/autotest_common.sh@10 -- # set +x 00:11:47.626 [2024-04-17 14:30:56.188757] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:11:47.626 [2024-04-17 14:30:56.189687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:11:47.884 [2024-04-17 14:30:56.348842] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.885 [2024-04-17 14:30:56.429259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.818 14:30:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:48.818 14:30:57 -- common/autotest_common.sh@850 -- # return 0 00:11:48.818 14:30:57 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:49.076 14:30:57 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59116 00:11:49.076 14:30:57 -- common/autotest_common.sh@936 -- # '[' -z 59116 ']' 00:11:49.076 14:30:57 -- common/autotest_common.sh@940 -- # kill -0 59116 00:11:49.076 14:30:57 -- common/autotest_common.sh@941 -- # uname 00:11:49.076 14:30:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:49.076 14:30:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59116 00:11:49.076 killing process with pid 59116 00:11:49.076 14:30:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:49.076 14:30:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:49.076 14:30:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59116' 00:11:49.076 14:30:57 -- common/autotest_common.sh@955 -- # kill 59116 00:11:49.076 14:30:57 -- common/autotest_common.sh@960 -- # wait 59116 00:11:49.334 ************************************ 00:11:49.334 END TEST alias_rpc 00:11:49.334 ************************************ 00:11:49.334 00:11:49.334 real 0m1.869s 00:11:49.334 user 0m2.387s 00:11:49.334 sys 0m0.329s 00:11:49.334 14:30:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:49.334 14:30:57 -- common/autotest_common.sh@10 -- # set +x 00:11:49.592 14:30:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 0 ]] 00:11:49.592 14:30:57 -- spdk/autotest.sh@172 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:49.592 14:30:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:49.592 14:30:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:49.592 14:30:57 -- common/autotest_common.sh@10 -- # set +x 00:11:49.592 ************************************ 00:11:49.592 START TEST spdkcli_tcp 00:11:49.592 ************************************ 00:11:49.592 14:30:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:49.592 * Looking for test storage... 
00:11:49.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:49.592 14:30:58 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:49.592 14:30:58 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:49.592 14:30:58 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:49.592 14:30:58 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:49.592 14:30:58 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:49.592 14:30:58 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:49.592 14:30:58 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:49.592 14:30:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:49.592 14:30:58 -- common/autotest_common.sh@10 -- # set +x 00:11:49.592 14:30:58 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59192 00:11:49.592 14:30:58 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:49.592 14:30:58 -- spdkcli/tcp.sh@27 -- # waitforlisten 59192 00:11:49.592 14:30:58 -- common/autotest_common.sh@817 -- # '[' -z 59192 ']' 00:11:49.592 14:30:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.592 14:30:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:49.592 14:30:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.592 14:30:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:49.592 14:30:58 -- common/autotest_common.sh@10 -- # set +x 00:11:49.592 [2024-04-17 14:30:58.150485] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:11:49.592 [2024-04-17 14:30:58.150807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59192 ] 00:11:49.851 [2024-04-17 14:30:58.282064] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:49.851 [2024-04-17 14:30:58.342202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.851 [2024-04-17 14:30:58.342212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.108 14:30:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:50.108 14:30:58 -- common/autotest_common.sh@850 -- # return 0 00:11:50.108 14:30:58 -- spdkcli/tcp.sh@31 -- # socat_pid=59201 00:11:50.108 14:30:58 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:50.108 14:30:58 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:50.367 [ 00:11:50.367 "bdev_malloc_delete", 00:11:50.367 "bdev_malloc_create", 00:11:50.367 "bdev_null_resize", 00:11:50.367 "bdev_null_delete", 00:11:50.367 "bdev_null_create", 00:11:50.367 "bdev_nvme_cuse_unregister", 00:11:50.367 "bdev_nvme_cuse_register", 00:11:50.367 "bdev_opal_new_user", 00:11:50.367 "bdev_opal_set_lock_state", 00:11:50.367 "bdev_opal_delete", 00:11:50.367 "bdev_opal_get_info", 00:11:50.367 "bdev_opal_create", 00:11:50.367 "bdev_nvme_opal_revert", 00:11:50.367 "bdev_nvme_opal_init", 00:11:50.367 "bdev_nvme_send_cmd", 00:11:50.367 "bdev_nvme_get_path_iostat", 00:11:50.367 "bdev_nvme_get_mdns_discovery_info", 00:11:50.367 "bdev_nvme_stop_mdns_discovery", 00:11:50.367 "bdev_nvme_start_mdns_discovery", 00:11:50.367 "bdev_nvme_set_multipath_policy", 00:11:50.367 "bdev_nvme_set_preferred_path", 00:11:50.367 "bdev_nvme_get_io_paths", 00:11:50.367 "bdev_nvme_remove_error_injection", 00:11:50.367 "bdev_nvme_add_error_injection", 00:11:50.367 "bdev_nvme_get_discovery_info", 00:11:50.367 "bdev_nvme_stop_discovery", 00:11:50.367 "bdev_nvme_start_discovery", 00:11:50.367 "bdev_nvme_get_controller_health_info", 00:11:50.367 "bdev_nvme_disable_controller", 00:11:50.367 "bdev_nvme_enable_controller", 00:11:50.367 "bdev_nvme_reset_controller", 00:11:50.367 "bdev_nvme_get_transport_statistics", 00:11:50.367 "bdev_nvme_apply_firmware", 00:11:50.368 "bdev_nvme_detach_controller", 00:11:50.368 "bdev_nvme_get_controllers", 00:11:50.368 "bdev_nvme_attach_controller", 00:11:50.368 "bdev_nvme_set_hotplug", 00:11:50.368 "bdev_nvme_set_options", 00:11:50.368 "bdev_passthru_delete", 00:11:50.368 "bdev_passthru_create", 00:11:50.368 "bdev_lvol_grow_lvstore", 00:11:50.368 "bdev_lvol_get_lvols", 00:11:50.368 "bdev_lvol_get_lvstores", 00:11:50.368 "bdev_lvol_delete", 00:11:50.368 "bdev_lvol_set_read_only", 00:11:50.368 "bdev_lvol_resize", 00:11:50.368 "bdev_lvol_decouple_parent", 00:11:50.368 "bdev_lvol_inflate", 00:11:50.368 "bdev_lvol_rename", 00:11:50.368 "bdev_lvol_clone_bdev", 00:11:50.368 "bdev_lvol_clone", 00:11:50.368 "bdev_lvol_snapshot", 00:11:50.368 "bdev_lvol_create", 00:11:50.368 "bdev_lvol_delete_lvstore", 00:11:50.368 "bdev_lvol_rename_lvstore", 00:11:50.368 "bdev_lvol_create_lvstore", 00:11:50.368 "bdev_raid_set_options", 00:11:50.368 "bdev_raid_remove_base_bdev", 00:11:50.368 "bdev_raid_add_base_bdev", 00:11:50.368 "bdev_raid_delete", 00:11:50.368 "bdev_raid_create", 00:11:50.368 "bdev_raid_get_bdevs", 00:11:50.368 "bdev_error_inject_error", 
00:11:50.368 "bdev_error_delete", 00:11:50.368 "bdev_error_create", 00:11:50.368 "bdev_split_delete", 00:11:50.368 "bdev_split_create", 00:11:50.368 "bdev_delay_delete", 00:11:50.368 "bdev_delay_create", 00:11:50.368 "bdev_delay_update_latency", 00:11:50.368 "bdev_zone_block_delete", 00:11:50.368 "bdev_zone_block_create", 00:11:50.368 "blobfs_create", 00:11:50.368 "blobfs_detect", 00:11:50.368 "blobfs_set_cache_size", 00:11:50.368 "bdev_aio_delete", 00:11:50.368 "bdev_aio_rescan", 00:11:50.368 "bdev_aio_create", 00:11:50.368 "bdev_ftl_set_property", 00:11:50.368 "bdev_ftl_get_properties", 00:11:50.368 "bdev_ftl_get_stats", 00:11:50.368 "bdev_ftl_unmap", 00:11:50.368 "bdev_ftl_unload", 00:11:50.368 "bdev_ftl_delete", 00:11:50.368 "bdev_ftl_load", 00:11:50.368 "bdev_ftl_create", 00:11:50.368 "bdev_virtio_attach_controller", 00:11:50.368 "bdev_virtio_scsi_get_devices", 00:11:50.368 "bdev_virtio_detach_controller", 00:11:50.368 "bdev_virtio_blk_set_hotplug", 00:11:50.368 "bdev_iscsi_delete", 00:11:50.368 "bdev_iscsi_create", 00:11:50.368 "bdev_iscsi_set_options", 00:11:50.368 "bdev_uring_delete", 00:11:50.368 "bdev_uring_rescan", 00:11:50.368 "bdev_uring_create", 00:11:50.368 "accel_error_inject_error", 00:11:50.368 "ioat_scan_accel_module", 00:11:50.368 "dsa_scan_accel_module", 00:11:50.368 "iaa_scan_accel_module", 00:11:50.368 "keyring_file_remove_key", 00:11:50.368 "keyring_file_add_key", 00:11:50.368 "iscsi_set_options", 00:11:50.368 "iscsi_get_auth_groups", 00:11:50.368 "iscsi_auth_group_remove_secret", 00:11:50.368 "iscsi_auth_group_add_secret", 00:11:50.368 "iscsi_delete_auth_group", 00:11:50.368 "iscsi_create_auth_group", 00:11:50.368 "iscsi_set_discovery_auth", 00:11:50.368 "iscsi_get_options", 00:11:50.368 "iscsi_target_node_request_logout", 00:11:50.368 "iscsi_target_node_set_redirect", 00:11:50.368 "iscsi_target_node_set_auth", 00:11:50.368 "iscsi_target_node_add_lun", 00:11:50.368 "iscsi_get_stats", 00:11:50.368 "iscsi_get_connections", 00:11:50.368 "iscsi_portal_group_set_auth", 00:11:50.368 "iscsi_start_portal_group", 00:11:50.368 "iscsi_delete_portal_group", 00:11:50.368 "iscsi_create_portal_group", 00:11:50.368 "iscsi_get_portal_groups", 00:11:50.368 "iscsi_delete_target_node", 00:11:50.368 "iscsi_target_node_remove_pg_ig_maps", 00:11:50.368 "iscsi_target_node_add_pg_ig_maps", 00:11:50.368 "iscsi_create_target_node", 00:11:50.368 "iscsi_get_target_nodes", 00:11:50.368 "iscsi_delete_initiator_group", 00:11:50.368 "iscsi_initiator_group_remove_initiators", 00:11:50.368 "iscsi_initiator_group_add_initiators", 00:11:50.368 "iscsi_create_initiator_group", 00:11:50.368 "iscsi_get_initiator_groups", 00:11:50.368 "nvmf_set_crdt", 00:11:50.368 "nvmf_set_config", 00:11:50.368 "nvmf_set_max_subsystems", 00:11:50.368 "nvmf_subsystem_get_listeners", 00:11:50.368 "nvmf_subsystem_get_qpairs", 00:11:50.368 "nvmf_subsystem_get_controllers", 00:11:50.368 "nvmf_get_stats", 00:11:50.368 "nvmf_get_transports", 00:11:50.368 "nvmf_create_transport", 00:11:50.368 "nvmf_get_targets", 00:11:50.368 "nvmf_delete_target", 00:11:50.368 "nvmf_create_target", 00:11:50.368 "nvmf_subsystem_allow_any_host", 00:11:50.368 "nvmf_subsystem_remove_host", 00:11:50.368 "nvmf_subsystem_add_host", 00:11:50.368 "nvmf_ns_remove_host", 00:11:50.368 "nvmf_ns_add_host", 00:11:50.368 "nvmf_subsystem_remove_ns", 00:11:50.368 "nvmf_subsystem_add_ns", 00:11:50.368 "nvmf_subsystem_listener_set_ana_state", 00:11:50.368 "nvmf_discovery_get_referrals", 00:11:50.368 "nvmf_discovery_remove_referral", 00:11:50.368 
"nvmf_discovery_add_referral", 00:11:50.368 "nvmf_subsystem_remove_listener", 00:11:50.368 "nvmf_subsystem_add_listener", 00:11:50.368 "nvmf_delete_subsystem", 00:11:50.368 "nvmf_create_subsystem", 00:11:50.368 "nvmf_get_subsystems", 00:11:50.368 "env_dpdk_get_mem_stats", 00:11:50.368 "nbd_get_disks", 00:11:50.368 "nbd_stop_disk", 00:11:50.368 "nbd_start_disk", 00:11:50.368 "ublk_recover_disk", 00:11:50.368 "ublk_get_disks", 00:11:50.368 "ublk_stop_disk", 00:11:50.368 "ublk_start_disk", 00:11:50.368 "ublk_destroy_target", 00:11:50.368 "ublk_create_target", 00:11:50.368 "virtio_blk_create_transport", 00:11:50.368 "virtio_blk_get_transports", 00:11:50.368 "vhost_controller_set_coalescing", 00:11:50.368 "vhost_get_controllers", 00:11:50.368 "vhost_delete_controller", 00:11:50.368 "vhost_create_blk_controller", 00:11:50.368 "vhost_scsi_controller_remove_target", 00:11:50.368 "vhost_scsi_controller_add_target", 00:11:50.368 "vhost_start_scsi_controller", 00:11:50.368 "vhost_create_scsi_controller", 00:11:50.368 "thread_set_cpumask", 00:11:50.368 "framework_get_scheduler", 00:11:50.368 "framework_set_scheduler", 00:11:50.368 "framework_get_reactors", 00:11:50.368 "thread_get_io_channels", 00:11:50.368 "thread_get_pollers", 00:11:50.368 "thread_get_stats", 00:11:50.368 "framework_monitor_context_switch", 00:11:50.368 "spdk_kill_instance", 00:11:50.368 "log_enable_timestamps", 00:11:50.368 "log_get_flags", 00:11:50.368 "log_clear_flag", 00:11:50.368 "log_set_flag", 00:11:50.368 "log_get_level", 00:11:50.368 "log_set_level", 00:11:50.368 "log_get_print_level", 00:11:50.368 "log_set_print_level", 00:11:50.368 "framework_enable_cpumask_locks", 00:11:50.368 "framework_disable_cpumask_locks", 00:11:50.368 "framework_wait_init", 00:11:50.368 "framework_start_init", 00:11:50.368 "scsi_get_devices", 00:11:50.368 "bdev_get_histogram", 00:11:50.368 "bdev_enable_histogram", 00:11:50.368 "bdev_set_qos_limit", 00:11:50.368 "bdev_set_qd_sampling_period", 00:11:50.368 "bdev_get_bdevs", 00:11:50.368 "bdev_reset_iostat", 00:11:50.368 "bdev_get_iostat", 00:11:50.368 "bdev_examine", 00:11:50.368 "bdev_wait_for_examine", 00:11:50.368 "bdev_set_options", 00:11:50.368 "notify_get_notifications", 00:11:50.368 "notify_get_types", 00:11:50.368 "accel_get_stats", 00:11:50.368 "accel_set_options", 00:11:50.368 "accel_set_driver", 00:11:50.368 "accel_crypto_key_destroy", 00:11:50.368 "accel_crypto_keys_get", 00:11:50.368 "accel_crypto_key_create", 00:11:50.368 "accel_assign_opc", 00:11:50.368 "accel_get_module_info", 00:11:50.368 "accel_get_opc_assignments", 00:11:50.368 "vmd_rescan", 00:11:50.368 "vmd_remove_device", 00:11:50.368 "vmd_enable", 00:11:50.368 "sock_set_default_impl", 00:11:50.368 "sock_impl_set_options", 00:11:50.368 "sock_impl_get_options", 00:11:50.368 "iobuf_get_stats", 00:11:50.368 "iobuf_set_options", 00:11:50.368 "framework_get_pci_devices", 00:11:50.368 "framework_get_config", 00:11:50.368 "framework_get_subsystems", 00:11:50.368 "trace_get_info", 00:11:50.368 "trace_get_tpoint_group_mask", 00:11:50.368 "trace_disable_tpoint_group", 00:11:50.368 "trace_enable_tpoint_group", 00:11:50.368 "trace_clear_tpoint_mask", 00:11:50.368 "trace_set_tpoint_mask", 00:11:50.368 "keyring_get_keys", 00:11:50.368 "spdk_get_version", 00:11:50.368 "rpc_get_methods" 00:11:50.368 ] 00:11:50.368 14:30:58 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:50.368 14:30:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:50.368 14:30:58 -- common/autotest_common.sh@10 -- # set +x 00:11:50.368 14:30:58 -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:50.368 14:30:58 -- spdkcli/tcp.sh@38 -- # killprocess 59192 00:11:50.368 14:30:58 -- common/autotest_common.sh@936 -- # '[' -z 59192 ']' 00:11:50.368 14:30:58 -- common/autotest_common.sh@940 -- # kill -0 59192 00:11:50.368 14:30:58 -- common/autotest_common.sh@941 -- # uname 00:11:50.368 14:30:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:50.368 14:30:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59192 00:11:50.368 killing process with pid 59192 00:11:50.368 14:30:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:50.368 14:30:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:50.368 14:30:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59192' 00:11:50.368 14:30:58 -- common/autotest_common.sh@955 -- # kill 59192 00:11:50.368 14:30:58 -- common/autotest_common.sh@960 -- # wait 59192 00:11:50.627 ************************************ 00:11:50.627 END TEST spdkcli_tcp 00:11:50.627 ************************************ 00:11:50.627 00:11:50.627 real 0m1.162s 00:11:50.627 user 0m2.192s 00:11:50.627 sys 0m0.298s 00:11:50.627 14:30:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:50.627 14:30:59 -- common/autotest_common.sh@10 -- # set +x 00:11:50.627 14:30:59 -- spdk/autotest.sh@175 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:50.627 14:30:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:50.627 14:30:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:50.627 14:30:59 -- common/autotest_common.sh@10 -- # set +x 00:11:50.885 ************************************ 00:11:50.885 START TEST dpdk_mem_utility 00:11:50.885 ************************************ 00:11:50.885 14:30:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:50.885 * Looking for test storage... 00:11:50.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:50.886 14:30:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:50.886 14:30:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59282 00:11:50.886 14:30:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59282 00:11:50.886 14:30:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.886 14:30:59 -- common/autotest_common.sh@817 -- # '[' -z 59282 ']' 00:11:50.886 14:30:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.886 14:30:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:50.886 14:30:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.886 14:30:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:50.886 14:30:59 -- common/autotest_common.sh@10 -- # set +x 00:11:50.886 [2024-04-17 14:30:59.441645] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:11:50.886 [2024-04-17 14:30:59.442014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59282 ] 00:11:51.144 [2024-04-17 14:30:59.579011] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.144 [2024-04-17 14:30:59.640161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.080 14:31:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:52.080 14:31:00 -- common/autotest_common.sh@850 -- # return 0 00:11:52.080 14:31:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:52.080 14:31:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:52.080 14:31:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.080 14:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:52.080 { 00:11:52.080 "filename": "/tmp/spdk_mem_dump.txt" 00:11:52.080 } 00:11:52.080 14:31:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.080 14:31:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:52.080 DPDK memory size 814.000000 MiB in 1 heap(s) 00:11:52.080 1 heaps totaling size 814.000000 MiB 00:11:52.080 size: 814.000000 MiB heap id: 0 00:11:52.080 end heaps---------- 00:11:52.080 8 mempools totaling size 598.116089 MiB 00:11:52.080 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:52.080 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:52.080 size: 84.521057 MiB name: bdev_io_59282 00:11:52.080 size: 51.011292 MiB name: evtpool_59282 00:11:52.080 size: 50.003479 MiB name: msgpool_59282 00:11:52.080 size: 21.763794 MiB name: PDU_Pool 00:11:52.080 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:52.080 size: 0.026123 MiB name: Session_Pool 00:11:52.080 end mempools------- 00:11:52.080 6 memzones totaling size 4.142822 MiB 00:11:52.080 size: 1.000366 MiB name: RG_ring_0_59282 00:11:52.080 size: 1.000366 MiB name: RG_ring_1_59282 00:11:52.080 size: 1.000366 MiB name: RG_ring_4_59282 00:11:52.080 size: 1.000366 MiB name: RG_ring_5_59282 00:11:52.080 size: 0.125366 MiB name: RG_ring_2_59282 00:11:52.080 size: 0.015991 MiB name: RG_ring_3_59282 00:11:52.080 end memzones------- 00:11:52.080 14:31:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:52.080 heap id: 0 total size: 814.000000 MiB number of busy elements: 309 number of free elements: 15 00:11:52.080 list of free elements. 
size: 12.470276 MiB 00:11:52.081 element at address: 0x200000400000 with size: 1.999512 MiB 00:11:52.081 element at address: 0x200018e00000 with size: 0.999878 MiB 00:11:52.081 element at address: 0x200019000000 with size: 0.999878 MiB 00:11:52.081 element at address: 0x200003e00000 with size: 0.996277 MiB 00:11:52.081 element at address: 0x200031c00000 with size: 0.994446 MiB 00:11:52.081 element at address: 0x200013800000 with size: 0.978699 MiB 00:11:52.081 element at address: 0x200007000000 with size: 0.959839 MiB 00:11:52.081 element at address: 0x200019200000 with size: 0.936584 MiB 00:11:52.081 element at address: 0x200000200000 with size: 0.832825 MiB 00:11:52.081 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:11:52.081 element at address: 0x20000b200000 with size: 0.488892 MiB 00:11:52.081 element at address: 0x200000800000 with size: 0.486145 MiB 00:11:52.081 element at address: 0x200019400000 with size: 0.485657 MiB 00:11:52.081 element at address: 0x200027e00000 with size: 0.395752 MiB 00:11:52.081 element at address: 0x200003a00000 with size: 0.347839 MiB 00:11:52.081 list of standard malloc elements. size: 199.267151 MiB 00:11:52.081 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:11:52.081 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:11:52.081 element at address: 0x200018efff80 with size: 1.000122 MiB 00:11:52.081 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:11:52.081 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:52.081 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:52.081 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:11:52.081 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:52.081 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:11:52.081 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:11:52.081 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087c740 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087c800 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087c980 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59180 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59240 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59300 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59480 with size: 0.000183 MiB 00:11:52.081 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59600 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59780 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59840 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59900 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003adb300 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003adb500 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003affa80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003affb40 with size: 0.000183 MiB 00:11:52.081 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:11:52.081 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93280 with size: 0.000183 MiB 
00:11:52.082 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:11:52.082 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e65500 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:11:52.082 element at 
address: 0x200027e6c480 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6e940 
with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:11:52.082 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:11:52.083 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:11:52.083 list of memzone associated elements. 
size: 602.262573 MiB 00:11:52.083 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:11:52.083 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:52.083 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:11:52.083 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:52.083 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:11:52.083 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59282_0 00:11:52.083 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:11:52.083 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59282_0 00:11:52.083 element at address: 0x200003fff380 with size: 48.003052 MiB 00:11:52.083 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59282_0 00:11:52.083 element at address: 0x2000195be940 with size: 20.255554 MiB 00:11:52.083 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:52.083 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:11:52.083 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:52.083 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:11:52.083 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59282 00:11:52.083 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:11:52.083 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59282 00:11:52.083 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:52.083 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59282 00:11:52.083 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:11:52.083 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:52.083 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:11:52.083 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:52.083 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:11:52.083 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:52.083 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:11:52.083 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:52.083 element at address: 0x200003eff180 with size: 1.000488 MiB 00:11:52.083 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59282 00:11:52.083 element at address: 0x200003affc00 with size: 1.000488 MiB 00:11:52.083 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59282 00:11:52.083 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:11:52.083 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59282 00:11:52.083 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:11:52.083 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59282 00:11:52.083 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:11:52.083 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59282 00:11:52.083 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:11:52.083 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:52.083 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:11:52.083 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:52.083 element at address: 0x20001947c540 with size: 0.250488 MiB 00:11:52.083 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:52.083 element at address: 0x200003adf880 with size: 0.125488 MiB 00:11:52.083 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59282 00:11:52.083 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:11:52.083 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:52.083 element at address: 0x200027e65680 with size: 0.023743 MiB 00:11:52.083 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:52.083 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:11:52.083 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59282 00:11:52.083 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:11:52.083 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:52.083 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:11:52.083 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59282 00:11:52.083 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:11:52.083 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59282 00:11:52.083 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:11:52.083 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:52.083 14:31:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:52.083 14:31:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59282 00:11:52.083 14:31:00 -- common/autotest_common.sh@936 -- # '[' -z 59282 ']' 00:11:52.083 14:31:00 -- common/autotest_common.sh@940 -- # kill -0 59282 00:11:52.083 14:31:00 -- common/autotest_common.sh@941 -- # uname 00:11:52.083 14:31:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:52.083 14:31:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59282 00:11:52.083 killing process with pid 59282 00:11:52.083 14:31:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:52.083 14:31:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:52.083 14:31:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59282' 00:11:52.083 14:31:00 -- common/autotest_common.sh@955 -- # kill 59282 00:11:52.083 14:31:00 -- common/autotest_common.sh@960 -- # wait 59282 00:11:52.342 ************************************ 00:11:52.342 END TEST dpdk_mem_utility 00:11:52.342 ************************************ 00:11:52.342 00:11:52.342 real 0m1.582s 00:11:52.342 user 0m1.850s 00:11:52.342 sys 0m0.319s 00:11:52.342 14:31:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:52.342 14:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:52.342 14:31:00 -- spdk/autotest.sh@176 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:52.342 14:31:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:52.342 14:31:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.342 14:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 ************************************ 00:11:52.602 START TEST event 00:11:52.602 ************************************ 00:11:52.602 14:31:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:52.602 * Looking for test storage... 
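(A condensed, hand-runnable view of the dpdk_mem_utility steps above, for reference. Paths are the ones from this workspace; a running spdk_tgt with configured hugepages is assumed, and the comments only restate what the test output above shows.)

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  MEM=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # ask the running target to dump its DPDK memory stats;
  # the reply names the dump file, /tmp/spdk_mem_dump.txt in this run
  $RPC env_dpdk_get_mem_stats

  # summary view: heap, mempool and memzone totals as printed above
  $MEM

  # the '-m 0' form used by the test prints the per-element breakdown
  # (free list, malloc elements, memzone associations) seen above
  $MEM -m 0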
00:11:52.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:52.602 14:31:01 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:52.602 14:31:01 -- bdev/nbd_common.sh@6 -- # set -e 00:11:52.602 14:31:01 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:52.602 14:31:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:52.602 14:31:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.602 14:31:01 -- common/autotest_common.sh@10 -- # set +x 00:11:52.602 ************************************ 00:11:52.602 START TEST event_perf 00:11:52.602 ************************************ 00:11:52.602 14:31:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:52.602 Running I/O for 1 seconds...[2024-04-17 14:31:01.161921] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:11:52.602 [2024-04-17 14:31:01.162048] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:11:52.861 [2024-04-17 14:31:01.301420] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.861 [2024-04-17 14:31:01.363017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.861 [2024-04-17 14:31:01.363136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.861 [2024-04-17 14:31:01.363243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.861 Running I/O for 1 seconds...[2024-04-17 14:31:01.363245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.236 00:11:54.236 lcore 0: 190185 00:11:54.236 lcore 1: 190184 00:11:54.236 lcore 2: 190183 00:11:54.236 lcore 3: 190184 00:11:54.236 done. 00:11:54.236 00:11:54.236 real 0m1.318s 00:11:54.236 user 0m4.139s 00:11:54.236 sys 0m0.052s 00:11:54.236 14:31:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:54.236 14:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:54.236 ************************************ 00:11:54.236 END TEST event_perf 00:11:54.236 ************************************ 00:11:54.236 14:31:02 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:54.236 14:31:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:54.236 14:31:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.236 14:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:54.236 ************************************ 00:11:54.236 START TEST event_reactor 00:11:54.236 ************************************ 00:11:54.236 14:31:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:54.236 [2024-04-17 14:31:02.577569] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:11:54.236 [2024-04-17 14:31:02.577671] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59405 ] 00:11:54.236 [2024-04-17 14:31:02.718275] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.236 [2024-04-17 14:31:02.794201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.611 test_start 00:11:55.611 oneshot 00:11:55.611 tick 100 00:11:55.611 tick 100 00:11:55.611 tick 250 00:11:55.611 tick 100 00:11:55.611 tick 100 00:11:55.611 tick 250 00:11:55.611 tick 500 00:11:55.611 tick 100 00:11:55.611 tick 100 00:11:55.611 tick 100 00:11:55.611 tick 250 00:11:55.611 tick 100 00:11:55.611 tick 100 00:11:55.611 test_end 00:11:55.611 ************************************ 00:11:55.611 END TEST event_reactor 00:11:55.611 ************************************ 00:11:55.611 00:11:55.612 real 0m1.335s 00:11:55.612 user 0m1.191s 00:11:55.612 sys 0m0.036s 00:11:55.612 14:31:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:55.612 14:31:03 -- common/autotest_common.sh@10 -- # set +x 00:11:55.612 14:31:03 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:55.612 14:31:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:55.612 14:31:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:55.612 14:31:03 -- common/autotest_common.sh@10 -- # set +x 00:11:55.612 ************************************ 00:11:55.612 START TEST event_reactor_perf 00:11:55.612 ************************************ 00:11:55.612 14:31:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:55.612 [2024-04-17 14:31:04.010421] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:11:55.612 [2024-04-17 14:31:04.010512] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59439 ] 00:11:55.612 [2024-04-17 14:31:04.141559] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.870 [2024-04-17 14:31:04.216140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.809 test_start 00:11:56.809 test_end 00:11:56.809 Performance: 348867 events per second 00:11:56.809 ************************************ 00:11:56.809 END TEST event_reactor_perf 00:11:56.809 ************************************ 00:11:56.809 00:11:56.809 real 0m1.315s 00:11:56.809 user 0m1.171s 00:11:56.809 sys 0m0.035s 00:11:56.809 14:31:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:56.809 14:31:05 -- common/autotest_common.sh@10 -- # set +x 00:11:56.809 14:31:05 -- event/event.sh@49 -- # uname -s 00:11:56.809 14:31:05 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:56.809 14:31:05 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:56.809 14:31:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:56.809 14:31:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.809 14:31:05 -- common/autotest_common.sh@10 -- # set +x 00:11:57.067 ************************************ 00:11:57.068 START TEST event_scheduler 00:11:57.068 ************************************ 00:11:57.068 14:31:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:57.068 * Looking for test storage... 00:11:57.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:57.068 14:31:05 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:57.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.068 14:31:05 -- scheduler/scheduler.sh@35 -- # scheduler_pid=59511 00:11:57.068 14:31:05 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:57.068 14:31:05 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:57.068 14:31:05 -- scheduler/scheduler.sh@37 -- # waitforlisten 59511 00:11:57.068 14:31:05 -- common/autotest_common.sh@817 -- # '[' -z 59511 ']' 00:11:57.068 14:31:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.068 14:31:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:57.068 14:31:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.068 14:31:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:57.068 14:31:05 -- common/autotest_common.sh@10 -- # set +x 00:11:57.068 [2024-04-17 14:31:05.551104] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:11:57.068 [2024-04-17 14:31:05.551199] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59511 ] 00:11:57.326 [2024-04-17 14:31:05.690475] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.326 [2024-04-17 14:31:05.762127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.326 [2024-04-17 14:31:05.762217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.326 [2024-04-17 14:31:05.762297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.326 [2024-04-17 14:31:05.762300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.260 14:31:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:58.260 14:31:06 -- common/autotest_common.sh@850 -- # return 0 00:11:58.260 14:31:06 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:58.260 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.260 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.260 POWER: Env isn't set yet! 00:11:58.260 POWER: Attempting to initialise ACPI cpufreq power management... 00:11:58.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:58.260 POWER: Cannot set governor of lcore 0 to userspace 00:11:58.260 POWER: Attempting to initialise PSTAT power management... 00:11:58.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:58.260 POWER: Cannot set governor of lcore 0 to performance 00:11:58.260 POWER: Attempting to initialise AMD PSTATE power management... 00:11:58.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:58.260 POWER: Cannot set governor of lcore 0 to userspace 00:11:58.260 POWER: Attempting to initialise CPPC power management... 00:11:58.260 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:58.260 POWER: Cannot set governor of lcore 0 to userspace 00:11:58.260 POWER: Attempting to initialise VM power management... 00:11:58.260 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:58.260 POWER: Unable to set Power Management Environment for lcore 0 00:11:58.260 [2024-04-17 14:31:06.520701] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:11:58.260 [2024-04-17 14:31:06.520716] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:11:58.260 [2024-04-17 14:31:06.520725] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:11:58.260 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.260 14:31:06 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:58.260 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.260 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 [2024-04-17 14:31:06.577275] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:58.261 14:31:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:58.261 14:31:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 ************************************ 00:11:58.261 START TEST scheduler_create_thread 00:11:58.261 ************************************ 00:11:58.261 14:31:06 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 2 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 3 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 4 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 5 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 6 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 7 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 8 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 9 00:11:58.261 
14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 10 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.261 14:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.261 14:31:06 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:58.261 14:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.261 14:31:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.829 14:31:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.829 14:31:07 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:58.829 14:31:07 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:58.829 14:31:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.829 14:31:07 -- common/autotest_common.sh@10 -- # set +x 00:12:00.204 ************************************ 00:12:00.204 END TEST scheduler_create_thread 00:12:00.204 ************************************ 00:12:00.204 14:31:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:00.204 00:12:00.204 real 0m1.751s 00:12:00.204 user 0m0.017s 00:12:00.204 sys 0m0.004s 00:12:00.204 14:31:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:00.204 14:31:08 -- common/autotest_common.sh@10 -- # set +x 00:12:00.204 14:31:08 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:00.204 14:31:08 -- scheduler/scheduler.sh@46 -- # killprocess 59511 00:12:00.204 14:31:08 -- common/autotest_common.sh@936 -- # '[' -z 59511 ']' 00:12:00.204 14:31:08 -- common/autotest_common.sh@940 -- # kill -0 59511 00:12:00.204 14:31:08 -- common/autotest_common.sh@941 -- # uname 00:12:00.204 14:31:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:00.204 14:31:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59511 00:12:00.204 killing process with pid 59511 00:12:00.204 14:31:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:00.204 14:31:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:00.204 14:31:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59511' 00:12:00.204 14:31:08 -- common/autotest_common.sh@955 -- # kill 59511 00:12:00.204 14:31:08 -- common/autotest_common.sh@960 -- # wait 59511 00:12:00.463 [2024-04-17 14:31:08.880059] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
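The scheduler_create_thread test above drives the scheduler test application purely through RPC. rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py; a stand-in sketch of the same call sequence (default RPC socket assumed, and the thread IDs 11 and 12 are simply what this run returned) looks roughly like:

  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }
  # One fully active and one idle thread pinned to each of cores 0-3
  for mask in 0x1 0x2 0x4 0x8; do
      rpc_cmd scheduler_thread_create -n active_pinned -m "$mask" -a 100
      rpc_cmd scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
  done
  # Unpinned threads with partial load
  rpc_cmd scheduler_thread_create -n one_third_active -a 30
  thread_id=$(rpc_cmd scheduler_thread_create -n half_active -a 0)    # 11 in this run
  rpc_cmd scheduler_thread_set_active "$thread_id" 50
  # Create one more thread only to delete it again
  thread_id=$(rpc_cmd scheduler_thread_create -n deleted -a 100)      # 12 in this run
  rpc_cmd scheduler_thread_delete "$thread_id"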
00:12:00.723 ************************************ 00:12:00.723 END TEST event_scheduler 00:12:00.723 ************************************ 00:12:00.723 00:12:00.723 real 0m3.651s 00:12:00.723 user 0m6.743s 00:12:00.723 sys 0m0.334s 00:12:00.723 14:31:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:00.723 14:31:09 -- common/autotest_common.sh@10 -- # set +x 00:12:00.723 14:31:09 -- event/event.sh@51 -- # modprobe -n nbd 00:12:00.723 14:31:09 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:00.723 14:31:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:00.723 14:31:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:00.723 14:31:09 -- common/autotest_common.sh@10 -- # set +x 00:12:00.723 ************************************ 00:12:00.723 START TEST app_repeat 00:12:00.723 ************************************ 00:12:00.723 14:31:09 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:12:00.723 14:31:09 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.723 14:31:09 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:00.723 14:31:09 -- event/event.sh@13 -- # local nbd_list 00:12:00.723 14:31:09 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:00.723 14:31:09 -- event/event.sh@14 -- # local bdev_list 00:12:00.723 14:31:09 -- event/event.sh@15 -- # local repeat_times=4 00:12:00.723 14:31:09 -- event/event.sh@17 -- # modprobe nbd 00:12:00.723 14:31:09 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:00.723 14:31:09 -- event/event.sh@19 -- # repeat_pid=59610 00:12:00.723 14:31:09 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:00.723 Process app_repeat pid: 59610 00:12:00.723 14:31:09 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59610' 00:12:00.723 14:31:09 -- event/event.sh@23 -- # for i in {0..2} 00:12:00.723 spdk_app_start Round 0 00:12:00.723 14:31:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:00.723 14:31:09 -- event/event.sh@25 -- # waitforlisten 59610 /var/tmp/spdk-nbd.sock 00:12:00.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:00.723 14:31:09 -- common/autotest_common.sh@817 -- # '[' -z 59610 ']' 00:12:00.723 14:31:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:00.723 14:31:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:00.723 14:31:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:00.723 14:31:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:00.723 14:31:09 -- common/autotest_common.sh@10 -- # set +x 00:12:00.723 [2024-04-17 14:31:09.207701] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:00.723 [2024-04-17 14:31:09.207797] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59610 ] 00:12:00.982 [2024-04-17 14:31:09.343801] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:00.982 [2024-04-17 14:31:09.414486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.982 [2024-04-17 14:31:09.414498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.562 14:31:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:01.562 14:31:10 -- common/autotest_common.sh@850 -- # return 0 00:12:01.562 14:31:10 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:01.821 Malloc0 00:12:02.079 14:31:10 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:02.079 Malloc1 00:12:02.337 14:31:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:02.337 14:31:10 -- bdev/nbd_common.sh@12 -- # local i 00:12:02.338 14:31:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:02.338 14:31:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:02.338 14:31:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:02.597 /dev/nbd0 00:12:02.597 14:31:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:02.597 14:31:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:02.597 14:31:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:12:02.597 14:31:11 -- common/autotest_common.sh@855 -- # local i 00:12:02.597 14:31:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:02.597 14:31:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:02.597 14:31:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:12:02.597 14:31:11 -- common/autotest_common.sh@859 -- # break 00:12:02.597 14:31:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:02.597 14:31:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:02.597 14:31:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:02.598 1+0 records in 00:12:02.598 1+0 records out 00:12:02.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222159 s, 18.4 MB/s 00:12:02.598 14:31:11 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:02.598 14:31:11 -- common/autotest_common.sh@872 -- # size=4096 00:12:02.598 14:31:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:02.598 14:31:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:02.598 14:31:11 -- common/autotest_common.sh@875 -- # return 0 00:12:02.598 14:31:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:02.598 14:31:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:02.598 14:31:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:02.856 /dev/nbd1 00:12:02.856 14:31:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:02.856 14:31:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:02.856 14:31:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:12:02.856 14:31:11 -- common/autotest_common.sh@855 -- # local i 00:12:02.856 14:31:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:02.856 14:31:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:02.856 14:31:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:12:02.856 14:31:11 -- common/autotest_common.sh@859 -- # break 00:12:02.857 14:31:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:02.857 14:31:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:02.857 14:31:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:02.857 1+0 records in 00:12:02.857 1+0 records out 00:12:02.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690173 s, 5.9 MB/s 00:12:02.857 14:31:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:02.857 14:31:11 -- common/autotest_common.sh@872 -- # size=4096 00:12:02.857 14:31:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:02.857 14:31:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:02.857 14:31:11 -- common/autotest_common.sh@875 -- # return 0 00:12:02.857 14:31:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:02.857 14:31:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:02.857 14:31:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:02.857 14:31:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.857 14:31:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:03.115 { 00:12:03.115 "nbd_device": "/dev/nbd0", 00:12:03.115 "bdev_name": "Malloc0" 00:12:03.115 }, 00:12:03.115 { 00:12:03.115 "nbd_device": "/dev/nbd1", 00:12:03.115 "bdev_name": "Malloc1" 00:12:03.115 } 00:12:03.115 ]' 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:03.115 { 00:12:03.115 "nbd_device": "/dev/nbd0", 00:12:03.115 "bdev_name": "Malloc0" 00:12:03.115 }, 00:12:03.115 { 00:12:03.115 "nbd_device": "/dev/nbd1", 00:12:03.115 "bdev_name": "Malloc1" 00:12:03.115 } 00:12:03.115 ]' 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:03.115 /dev/nbd1' 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:03.115 /dev/nbd1' 00:12:03.115 14:31:11 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@65 -- # count=2 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@95 -- # count=2 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:03.115 256+0 records in 00:12:03.115 256+0 records out 00:12:03.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00750408 s, 140 MB/s 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:03.115 256+0 records in 00:12:03.115 256+0 records out 00:12:03.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033951 s, 30.9 MB/s 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:03.115 256+0 records in 00:12:03.115 256+0 records out 00:12:03.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299094 s, 35.1 MB/s 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:03.115 14:31:11 -- bdev/nbd_common.sh@51 -- # local i 00:12:03.116 14:31:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.116 14:31:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@41 -- # break 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@41 -- # break 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.683 14:31:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:03.942 14:31:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:03.942 14:31:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:03.942 14:31:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:04.220 14:31:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:04.220 14:31:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:04.220 14:31:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:04.220 14:31:12 -- bdev/nbd_common.sh@65 -- # true 00:12:04.220 14:31:12 -- bdev/nbd_common.sh@65 -- # count=0 00:12:04.220 14:31:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:04.220 14:31:12 -- bdev/nbd_common.sh@104 -- # count=0 00:12:04.220 14:31:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:04.220 14:31:12 -- bdev/nbd_common.sh@109 -- # return 0 00:12:04.220 14:31:12 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:04.478 14:31:12 -- event/event.sh@35 -- # sleep 3 00:12:04.478 [2024-04-17 14:31:12.977057] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:04.478 [2024-04-17 14:31:13.037453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.478 [2024-04-17 14:31:13.037507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.478 [2024-04-17 14:31:13.068915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:04.478 [2024-04-17 14:31:13.069016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:07.763 spdk_app_start Round 1 00:12:07.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
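Each app_repeat round repeats the same malloc-over-NBD write/verify flow traced above. Stripped of the waitfornbd/waitfornbd_exit polling and retry helpers, the per-round steps amount to the following sketch (socket and temp-file paths copied from the trace):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  # Two 64 MB malloc bdevs with 4 KiB blocks, each exported as an NBD device
  $rpc bdev_malloc_create 64 4096          # -> Malloc0
  $rpc bdev_malloc_create 64 4096          # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  # Push 1 MiB of random data through each device and verify it reads back intact
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$tmp" "$nbd"
  done
  rm "$tmp"
  # Detach both devices before the next round re-creates them
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1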
00:12:07.763 14:31:15 -- event/event.sh@23 -- # for i in {0..2} 00:12:07.763 14:31:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:07.763 14:31:15 -- event/event.sh@25 -- # waitforlisten 59610 /var/tmp/spdk-nbd.sock 00:12:07.763 14:31:15 -- common/autotest_common.sh@817 -- # '[' -z 59610 ']' 00:12:07.763 14:31:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:07.763 14:31:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:07.763 14:31:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:07.763 14:31:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:07.763 14:31:15 -- common/autotest_common.sh@10 -- # set +x 00:12:07.763 14:31:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:07.763 14:31:16 -- common/autotest_common.sh@850 -- # return 0 00:12:07.763 14:31:16 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:07.763 Malloc0 00:12:07.763 14:31:16 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:08.031 Malloc1 00:12:08.031 14:31:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@12 -- # local i 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:08.031 14:31:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:08.307 /dev/nbd0 00:12:08.307 14:31:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:08.307 14:31:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:08.307 14:31:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:12:08.307 14:31:16 -- common/autotest_common.sh@855 -- # local i 00:12:08.307 14:31:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:08.307 14:31:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:08.307 14:31:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:12:08.307 14:31:16 -- common/autotest_common.sh@859 -- # break 00:12:08.307 14:31:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:08.307 14:31:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:08.307 14:31:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:12:08.307 1+0 records in 00:12:08.307 1+0 records out 00:12:08.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472146 s, 8.7 MB/s 00:12:08.307 14:31:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:08.307 14:31:16 -- common/autotest_common.sh@872 -- # size=4096 00:12:08.307 14:31:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:08.307 14:31:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:08.307 14:31:16 -- common/autotest_common.sh@875 -- # return 0 00:12:08.307 14:31:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.307 14:31:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:08.307 14:31:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:08.566 /dev/nbd1 00:12:08.566 14:31:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:08.566 14:31:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:08.566 14:31:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:12:08.566 14:31:17 -- common/autotest_common.sh@855 -- # local i 00:12:08.566 14:31:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:08.566 14:31:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:08.566 14:31:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:12:08.566 14:31:17 -- common/autotest_common.sh@859 -- # break 00:12:08.566 14:31:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:08.566 14:31:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:08.566 14:31:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:08.566 1+0 records in 00:12:08.566 1+0 records out 00:12:08.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730131 s, 5.6 MB/s 00:12:08.566 14:31:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:08.566 14:31:17 -- common/autotest_common.sh@872 -- # size=4096 00:12:08.566 14:31:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:08.566 14:31:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:08.566 14:31:17 -- common/autotest_common.sh@875 -- # return 0 00:12:08.566 14:31:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.566 14:31:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:08.566 14:31:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:08.566 14:31:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.566 14:31:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:09.133 14:31:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:09.133 { 00:12:09.134 "nbd_device": "/dev/nbd0", 00:12:09.134 "bdev_name": "Malloc0" 00:12:09.134 }, 00:12:09.134 { 00:12:09.134 "nbd_device": "/dev/nbd1", 00:12:09.134 "bdev_name": "Malloc1" 00:12:09.134 } 00:12:09.134 ]' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:09.134 { 00:12:09.134 "nbd_device": "/dev/nbd0", 00:12:09.134 "bdev_name": "Malloc0" 00:12:09.134 }, 00:12:09.134 { 00:12:09.134 "nbd_device": "/dev/nbd1", 00:12:09.134 "bdev_name": "Malloc1" 00:12:09.134 } 00:12:09.134 ]' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:12:09.134 /dev/nbd1' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:09.134 /dev/nbd1' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@65 -- # count=2 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@95 -- # count=2 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:09.134 256+0 records in 00:12:09.134 256+0 records out 00:12:09.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00738753 s, 142 MB/s 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:09.134 256+0 records in 00:12:09.134 256+0 records out 00:12:09.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258445 s, 40.6 MB/s 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:09.134 256+0 records in 00:12:09.134 256+0 records out 00:12:09.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276153 s, 38.0 MB/s 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@51 -- # local i 00:12:09.134 
14:31:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.134 14:31:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@41 -- # break 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.392 14:31:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@41 -- # break 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.959 14:31:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@65 -- # true 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@65 -- # count=0 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@104 -- # count=0 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:10.218 14:31:18 -- bdev/nbd_common.sh@109 -- # return 0 00:12:10.218 14:31:18 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:10.785 14:31:19 -- event/event.sh@35 -- # sleep 3 00:12:10.785 [2024-04-17 14:31:19.289917] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:10.785 [2024-04-17 14:31:19.347186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.785 [2024-04-17 14:31:19.347196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.785 [2024-04-17 14:31:19.380117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:10.785 [2024-04-17 14:31:19.380203] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
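The nbd_get_count checks bracketing each round ('[' 2 -ne 2 ']' while the devices are attached, '[' 0 -ne 0 ']' after teardown) only count /dev/nbd entries in the nbd_get_disks JSON. A standalone equivalent of that check, using the same jq/grep pipeline as nbd_common.sh, is roughly:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # grep -c prints 0 on an empty list; '|| true' keeps the pipeline from failing in that case
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  echo "attached NBD devices: $count"    # expected: 2 while attached, 0 after nbd_stop_disk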
00:12:14.069 spdk_app_start Round 2 00:12:14.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:14.069 14:31:22 -- event/event.sh@23 -- # for i in {0..2} 00:12:14.069 14:31:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:14.069 14:31:22 -- event/event.sh@25 -- # waitforlisten 59610 /var/tmp/spdk-nbd.sock 00:12:14.069 14:31:22 -- common/autotest_common.sh@817 -- # '[' -z 59610 ']' 00:12:14.069 14:31:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:14.069 14:31:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:14.069 14:31:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:14.069 14:31:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:14.069 14:31:22 -- common/autotest_common.sh@10 -- # set +x 00:12:14.069 14:31:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:14.069 14:31:22 -- common/autotest_common.sh@850 -- # return 0 00:12:14.069 14:31:22 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:14.069 Malloc0 00:12:14.069 14:31:22 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:14.636 Malloc1 00:12:14.636 14:31:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@12 -- # local i 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:14.636 14:31:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:14.636 /dev/nbd0 00:12:14.895 14:31:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:14.895 14:31:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:14.895 14:31:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:12:14.895 14:31:23 -- common/autotest_common.sh@855 -- # local i 00:12:14.895 14:31:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:14.895 14:31:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:14.895 14:31:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:12:14.895 14:31:23 -- common/autotest_common.sh@859 -- # break 00:12:14.895 14:31:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:14.895 14:31:23 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:12:14.895 14:31:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:14.895 1+0 records in 00:12:14.895 1+0 records out 00:12:14.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242153 s, 16.9 MB/s 00:12:14.895 14:31:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:14.895 14:31:23 -- common/autotest_common.sh@872 -- # size=4096 00:12:14.895 14:31:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:14.895 14:31:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:14.895 14:31:23 -- common/autotest_common.sh@875 -- # return 0 00:12:14.895 14:31:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.895 14:31:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:14.895 14:31:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:14.895 /dev/nbd1 00:12:15.153 14:31:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:15.153 14:31:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:15.154 14:31:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:12:15.154 14:31:23 -- common/autotest_common.sh@855 -- # local i 00:12:15.154 14:31:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:15.154 14:31:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:15.154 14:31:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:12:15.154 14:31:23 -- common/autotest_common.sh@859 -- # break 00:12:15.154 14:31:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:15.154 14:31:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:15.154 14:31:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:15.154 1+0 records in 00:12:15.154 1+0 records out 00:12:15.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286951 s, 14.3 MB/s 00:12:15.154 14:31:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:15.154 14:31:23 -- common/autotest_common.sh@872 -- # size=4096 00:12:15.154 14:31:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:15.154 14:31:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:15.154 14:31:23 -- common/autotest_common.sh@875 -- # return 0 00:12:15.154 14:31:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:15.154 14:31:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:15.154 14:31:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:15.154 14:31:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.154 14:31:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:15.154 14:31:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:15.154 { 00:12:15.154 "nbd_device": "/dev/nbd0", 00:12:15.154 "bdev_name": "Malloc0" 00:12:15.154 }, 00:12:15.154 { 00:12:15.154 "nbd_device": "/dev/nbd1", 00:12:15.154 "bdev_name": "Malloc1" 00:12:15.154 } 00:12:15.154 ]' 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:15.413 { 00:12:15.413 "nbd_device": "/dev/nbd0", 00:12:15.413 "bdev_name": "Malloc0" 00:12:15.413 }, 00:12:15.413 { 00:12:15.413 "nbd_device": "/dev/nbd1", 00:12:15.413 "bdev_name": "Malloc1" 00:12:15.413 } 
00:12:15.413 ]' 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:15.413 /dev/nbd1' 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:15.413 /dev/nbd1' 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@65 -- # count=2 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@95 -- # count=2 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:15.413 256+0 records in 00:12:15.413 256+0 records out 00:12:15.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00846151 s, 124 MB/s 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.413 14:31:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:15.413 256+0 records in 00:12:15.413 256+0 records out 00:12:15.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019449 s, 53.9 MB/s 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:15.414 256+0 records in 00:12:15.414 256+0 records out 00:12:15.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237218 s, 44.2 MB/s 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:12:15.414 14:31:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@51 -- # local i 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.414 14:31:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@41 -- # break 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.673 14:31:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@41 -- # break 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.932 14:31:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@65 -- # true 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@65 -- # count=0 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@104 -- # count=0 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:16.191 14:31:24 -- bdev/nbd_common.sh@109 -- # return 0 00:12:16.191 14:31:24 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:16.450 14:31:25 -- event/event.sh@35 -- # sleep 3 00:12:16.709 [2024-04-17 14:31:25.177303] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:16.709 [2024-04-17 14:31:25.233666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.709 [2024-04-17 14:31:25.233675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.709 [2024-04-17 14:31:25.263298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:12:16.709 [2024-04-17 14:31:25.263352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:19.996 14:31:28 -- event/event.sh@38 -- # waitforlisten 59610 /var/tmp/spdk-nbd.sock 00:12:19.996 14:31:28 -- common/autotest_common.sh@817 -- # '[' -z 59610 ']' 00:12:19.996 14:31:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:19.996 14:31:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:19.996 14:31:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:19.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:19.996 14:31:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:19.996 14:31:28 -- common/autotest_common.sh@10 -- # set +x 00:12:19.996 14:31:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:19.996 14:31:28 -- common/autotest_common.sh@850 -- # return 0 00:12:19.996 14:31:28 -- event/event.sh@39 -- # killprocess 59610 00:12:19.996 14:31:28 -- common/autotest_common.sh@936 -- # '[' -z 59610 ']' 00:12:19.996 14:31:28 -- common/autotest_common.sh@940 -- # kill -0 59610 00:12:19.996 14:31:28 -- common/autotest_common.sh@941 -- # uname 00:12:19.996 14:31:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:19.996 14:31:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59610 00:12:19.996 killing process with pid 59610 00:12:19.996 14:31:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:19.996 14:31:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:19.996 14:31:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59610' 00:12:19.996 14:31:28 -- common/autotest_common.sh@955 -- # kill 59610 00:12:19.996 14:31:28 -- common/autotest_common.sh@960 -- # wait 59610 00:12:19.996 spdk_app_start is called in Round 0. 00:12:19.996 Shutdown signal received, stop current app iteration 00:12:19.996 Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 reinitialization... 00:12:19.996 spdk_app_start is called in Round 1. 00:12:19.996 Shutdown signal received, stop current app iteration 00:12:19.996 Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 reinitialization... 00:12:19.996 spdk_app_start is called in Round 2. 00:12:19.996 Shutdown signal received, stop current app iteration 00:12:19.996 Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 reinitialization... 00:12:19.996 spdk_app_start is called in Round 3. 
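The killprocess calls above (pid 59511 for the scheduler app, pid 59610 for app_repeat) follow the helper defined in autotest_common.sh. A simplified reconstruction, leaving out the sudo and non-Linux special cases the trace steps through, is:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return                     # nothing to do if it already exited
      local name
      name=$(ps --no-headers -o comm= "$pid")      # reactor_0 in this run
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap it so the exit is captured in the log
  }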
00:12:19.996 Shutdown signal received, stop current app iteration 00:12:19.996 14:31:28 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:19.996 14:31:28 -- event/event.sh@42 -- # return 0 00:12:19.996 00:12:19.996 real 0m19.344s 00:12:19.996 user 0m43.958s 00:12:19.996 sys 0m2.641s 00:12:19.996 14:31:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.996 14:31:28 -- common/autotest_common.sh@10 -- # set +x 00:12:19.996 ************************************ 00:12:19.996 END TEST app_repeat 00:12:19.996 ************************************ 00:12:19.996 14:31:28 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:19.996 14:31:28 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:19.996 14:31:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:19.996 14:31:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.996 14:31:28 -- common/autotest_common.sh@10 -- # set +x 00:12:20.294 ************************************ 00:12:20.294 START TEST cpu_locks 00:12:20.294 ************************************ 00:12:20.294 14:31:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:20.294 * Looking for test storage... 00:12:20.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:20.294 14:31:28 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:20.294 14:31:28 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:20.294 14:31:28 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:20.294 14:31:28 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:20.294 14:31:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:20.294 14:31:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:20.294 14:31:28 -- common/autotest_common.sh@10 -- # set +x 00:12:20.294 ************************************ 00:12:20.294 START TEST default_locks 00:12:20.294 ************************************ 00:12:20.294 14:31:28 -- common/autotest_common.sh@1111 -- # default_locks 00:12:20.294 14:31:28 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60058 00:12:20.294 14:31:28 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:20.294 14:31:28 -- event/cpu_locks.sh@47 -- # waitforlisten 60058 00:12:20.294 14:31:28 -- common/autotest_common.sh@817 -- # '[' -z 60058 ']' 00:12:20.294 14:31:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.294 14:31:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:20.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.294 14:31:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.294 14:31:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:20.294 14:31:28 -- common/autotest_common.sh@10 -- # set +x 00:12:20.294 [2024-04-17 14:31:28.857894] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:20.294 [2024-04-17 14:31:28.858069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60058 ] 00:12:20.552 [2024-04-17 14:31:28.995427] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.552 [2024-04-17 14:31:29.064943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.489 14:31:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:21.489 14:31:29 -- common/autotest_common.sh@850 -- # return 0 00:12:21.489 14:31:29 -- event/cpu_locks.sh@49 -- # locks_exist 60058 00:12:21.489 14:31:29 -- event/cpu_locks.sh@22 -- # lslocks -p 60058 00:12:21.489 14:31:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:21.747 14:31:30 -- event/cpu_locks.sh@50 -- # killprocess 60058 00:12:21.747 14:31:30 -- common/autotest_common.sh@936 -- # '[' -z 60058 ']' 00:12:21.747 14:31:30 -- common/autotest_common.sh@940 -- # kill -0 60058 00:12:21.747 14:31:30 -- common/autotest_common.sh@941 -- # uname 00:12:21.747 14:31:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:21.747 14:31:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60058 00:12:22.005 14:31:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:22.005 killing process with pid 60058 00:12:22.005 14:31:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:22.005 14:31:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60058' 00:12:22.005 14:31:30 -- common/autotest_common.sh@955 -- # kill 60058 00:12:22.005 14:31:30 -- common/autotest_common.sh@960 -- # wait 60058 00:12:22.264 14:31:30 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60058 00:12:22.264 14:31:30 -- common/autotest_common.sh@638 -- # local es=0 00:12:22.264 14:31:30 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60058 00:12:22.264 14:31:30 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:12:22.264 14:31:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:22.264 14:31:30 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:12:22.264 14:31:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:22.264 14:31:30 -- common/autotest_common.sh@641 -- # waitforlisten 60058 00:12:22.264 14:31:30 -- common/autotest_common.sh@817 -- # '[' -z 60058 ']' 00:12:22.264 14:31:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.264 14:31:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:22.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.264 14:31:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:22.264 14:31:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:22.264 14:31:30 -- common/autotest_common.sh@10 -- # set +x 00:12:22.264 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60058) - No such process 00:12:22.264 ERROR: process (pid: 60058) is no longer running 00:12:22.264 14:31:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:22.264 14:31:30 -- common/autotest_common.sh@850 -- # return 1 00:12:22.264 14:31:30 -- common/autotest_common.sh@641 -- # es=1 00:12:22.264 14:31:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:22.264 14:31:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:22.264 14:31:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:22.264 14:31:30 -- event/cpu_locks.sh@54 -- # no_locks 00:12:22.264 14:31:30 -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:22.264 14:31:30 -- event/cpu_locks.sh@26 -- # local lock_files 00:12:22.264 14:31:30 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:22.264 00:12:22.264 real 0m1.859s 00:12:22.264 user 0m2.125s 00:12:22.264 sys 0m0.497s 00:12:22.264 14:31:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:22.264 14:31:30 -- common/autotest_common.sh@10 -- # set +x 00:12:22.264 ************************************ 00:12:22.264 END TEST default_locks 00:12:22.264 ************************************ 00:12:22.264 14:31:30 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:22.264 14:31:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:22.264 14:31:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.264 14:31:30 -- common/autotest_common.sh@10 -- # set +x 00:12:22.264 ************************************ 00:12:22.264 START TEST default_locks_via_rpc 00:12:22.264 ************************************ 00:12:22.264 14:31:30 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:12:22.264 14:31:30 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60114 00:12:22.264 14:31:30 -- event/cpu_locks.sh@63 -- # waitforlisten 60114 00:12:22.264 14:31:30 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:22.264 14:31:30 -- common/autotest_common.sh@817 -- # '[' -z 60114 ']' 00:12:22.264 14:31:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.264 14:31:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:22.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.264 14:31:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.264 14:31:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:22.264 14:31:30 -- common/autotest_common.sh@10 -- # set +x 00:12:22.264 [2024-04-17 14:31:30.823480] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:22.264 [2024-04-17 14:31:30.823588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60114 ] 00:12:22.523 [2024-04-17 14:31:30.962331] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.524 [2024-04-17 14:31:31.021688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.459 14:31:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:23.459 14:31:31 -- common/autotest_common.sh@850 -- # return 0 00:12:23.459 14:31:31 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:23.459 14:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.459 14:31:31 -- common/autotest_common.sh@10 -- # set +x 00:12:23.459 14:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.459 14:31:31 -- event/cpu_locks.sh@67 -- # no_locks 00:12:23.459 14:31:31 -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:23.459 14:31:31 -- event/cpu_locks.sh@26 -- # local lock_files 00:12:23.459 14:31:31 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:23.459 14:31:31 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:23.459 14:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.459 14:31:31 -- common/autotest_common.sh@10 -- # set +x 00:12:23.459 14:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.459 14:31:31 -- event/cpu_locks.sh@71 -- # locks_exist 60114 00:12:23.459 14:31:31 -- event/cpu_locks.sh@22 -- # lslocks -p 60114 00:12:23.459 14:31:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:23.718 14:31:32 -- event/cpu_locks.sh@73 -- # killprocess 60114 00:12:23.718 14:31:32 -- common/autotest_common.sh@936 -- # '[' -z 60114 ']' 00:12:23.718 14:31:32 -- common/autotest_common.sh@940 -- # kill -0 60114 00:12:23.718 14:31:32 -- common/autotest_common.sh@941 -- # uname 00:12:23.718 14:31:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:23.718 14:31:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60114 00:12:23.718 14:31:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:23.718 14:31:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:23.718 14:31:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60114' 00:12:23.718 killing process with pid 60114 00:12:23.718 14:31:32 -- common/autotest_common.sh@955 -- # kill 60114 00:12:23.718 14:31:32 -- common/autotest_common.sh@960 -- # wait 60114 00:12:23.978 00:12:23.978 real 0m1.685s 00:12:23.978 user 0m1.875s 00:12:23.978 sys 0m0.435s 00:12:23.978 14:31:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:23.978 14:31:32 -- common/autotest_common.sh@10 -- # set +x 00:12:23.978 ************************************ 00:12:23.978 END TEST default_locks_via_rpc 00:12:23.978 ************************************ 00:12:23.978 14:31:32 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:23.978 14:31:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:23.978 14:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.978 14:31:32 -- common/autotest_common.sh@10 -- # set +x 00:12:23.978 ************************************ 00:12:23.978 START TEST non_locking_app_on_locked_coremask 00:12:23.978 ************************************ 00:12:23.978 14:31:32 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:12:23.978 14:31:32 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60169 00:12:23.978 14:31:32 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:23.978 14:31:32 -- event/cpu_locks.sh@81 -- # waitforlisten 60169 /var/tmp/spdk.sock 00:12:23.978 14:31:32 -- common/autotest_common.sh@817 -- # '[' -z 60169 ']' 00:12:23.978 14:31:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.978 14:31:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:23.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.978 14:31:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.978 14:31:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:23.978 14:31:32 -- common/autotest_common.sh@10 -- # set +x 00:12:24.237 [2024-04-17 14:31:32.619019] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:24.237 [2024-04-17 14:31:32.619123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60169 ] 00:12:24.237 [2024-04-17 14:31:32.755871] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.237 [2024-04-17 14:31:32.815506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.496 14:31:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:24.496 14:31:32 -- common/autotest_common.sh@850 -- # return 0 00:12:24.496 14:31:32 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60173 00:12:24.496 14:31:32 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:24.496 14:31:32 -- event/cpu_locks.sh@85 -- # waitforlisten 60173 /var/tmp/spdk2.sock 00:12:24.496 14:31:32 -- common/autotest_common.sh@817 -- # '[' -z 60173 ']' 00:12:24.496 14:31:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:24.496 14:31:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:24.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:24.496 14:31:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:24.496 14:31:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:24.496 14:31:32 -- common/autotest_common.sh@10 -- # set +x 00:12:24.496 [2024-04-17 14:31:33.039214] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:24.496 [2024-04-17 14:31:33.039322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60173 ] 00:12:24.755 [2024-04-17 14:31:33.185471] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:24.755 [2024-04-17 14:31:33.185550] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.755 [2024-04-17 14:31:33.300604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.691 14:31:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:25.691 14:31:34 -- common/autotest_common.sh@850 -- # return 0 00:12:25.691 14:31:34 -- event/cpu_locks.sh@87 -- # locks_exist 60169 00:12:25.691 14:31:34 -- event/cpu_locks.sh@22 -- # lslocks -p 60169 00:12:25.691 14:31:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:26.258 14:31:34 -- event/cpu_locks.sh@89 -- # killprocess 60169 00:12:26.258 14:31:34 -- common/autotest_common.sh@936 -- # '[' -z 60169 ']' 00:12:26.258 14:31:34 -- common/autotest_common.sh@940 -- # kill -0 60169 00:12:26.258 14:31:34 -- common/autotest_common.sh@941 -- # uname 00:12:26.258 14:31:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.258 14:31:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60169 00:12:26.258 14:31:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.258 14:31:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.258 killing process with pid 60169 00:12:26.258 14:31:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60169' 00:12:26.258 14:31:34 -- common/autotest_common.sh@955 -- # kill 60169 00:12:26.258 14:31:34 -- common/autotest_common.sh@960 -- # wait 60169 00:12:26.825 14:31:35 -- event/cpu_locks.sh@90 -- # killprocess 60173 00:12:26.825 14:31:35 -- common/autotest_common.sh@936 -- # '[' -z 60173 ']' 00:12:26.825 14:31:35 -- common/autotest_common.sh@940 -- # kill -0 60173 00:12:26.825 14:31:35 -- common/autotest_common.sh@941 -- # uname 00:12:26.825 14:31:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.825 14:31:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60173 00:12:27.084 14:31:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:27.084 14:31:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:27.084 killing process with pid 60173 00:12:27.084 14:31:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60173' 00:12:27.084 14:31:35 -- common/autotest_common.sh@955 -- # kill 60173 00:12:27.084 14:31:35 -- common/autotest_common.sh@960 -- # wait 60173 00:12:27.343 00:12:27.343 real 0m3.151s 00:12:27.343 user 0m3.671s 00:12:27.343 sys 0m0.845s 00:12:27.343 14:31:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:27.343 14:31:35 -- common/autotest_common.sh@10 -- # set +x 00:12:27.343 ************************************ 00:12:27.343 END TEST non_locking_app_on_locked_coremask 00:12:27.343 ************************************ 00:12:27.343 14:31:35 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:27.343 14:31:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:27.343 14:31:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.343 14:31:35 -- common/autotest_common.sh@10 -- # set +x 00:12:27.343 ************************************ 00:12:27.343 START TEST locking_app_on_unlocked_coremask 00:12:27.343 ************************************ 00:12:27.343 14:31:35 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:12:27.343 14:31:35 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60244 00:12:27.343 14:31:35 -- event/cpu_locks.sh@99 -- # waitforlisten 60244 /var/tmp/spdk.sock 00:12:27.343 
14:31:35 -- common/autotest_common.sh@817 -- # '[' -z 60244 ']' 00:12:27.343 14:31:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.343 14:31:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:27.343 14:31:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.343 14:31:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:27.343 14:31:35 -- common/autotest_common.sh@10 -- # set +x 00:12:27.343 14:31:35 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:27.343 [2024-04-17 14:31:35.880218] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:27.343 [2024-04-17 14:31:35.880316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60244 ] 00:12:27.602 [2024-04-17 14:31:36.018355] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:27.602 [2024-04-17 14:31:36.018413] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.602 [2024-04-17 14:31:36.090063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.538 14:31:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:28.538 14:31:36 -- common/autotest_common.sh@850 -- # return 0 00:12:28.538 14:31:36 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60260 00:12:28.538 14:31:36 -- event/cpu_locks.sh@103 -- # waitforlisten 60260 /var/tmp/spdk2.sock 00:12:28.538 14:31:36 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:28.538 14:31:36 -- common/autotest_common.sh@817 -- # '[' -z 60260 ']' 00:12:28.538 14:31:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:28.538 14:31:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:28.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:28.539 14:31:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:28.539 14:31:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:28.539 14:31:36 -- common/autotest_common.sh@10 -- # set +x 00:12:28.539 [2024-04-17 14:31:36.913500] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:28.539 [2024-04-17 14:31:36.913595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60260 ] 00:12:28.539 [2024-04-17 14:31:37.057571] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.796 [2024-04-17 14:31:37.172208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.731 14:31:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:29.731 14:31:37 -- common/autotest_common.sh@850 -- # return 0 00:12:29.731 14:31:37 -- event/cpu_locks.sh@105 -- # locks_exist 60260 00:12:29.731 14:31:37 -- event/cpu_locks.sh@22 -- # lslocks -p 60260 00:12:29.731 14:31:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:30.297 14:31:38 -- event/cpu_locks.sh@107 -- # killprocess 60244 00:12:30.297 14:31:38 -- common/autotest_common.sh@936 -- # '[' -z 60244 ']' 00:12:30.297 14:31:38 -- common/autotest_common.sh@940 -- # kill -0 60244 00:12:30.297 14:31:38 -- common/autotest_common.sh@941 -- # uname 00:12:30.297 14:31:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.297 14:31:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60244 00:12:30.297 14:31:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.297 killing process with pid 60244 00:12:30.297 14:31:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.297 14:31:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60244' 00:12:30.298 14:31:38 -- common/autotest_common.sh@955 -- # kill 60244 00:12:30.298 14:31:38 -- common/autotest_common.sh@960 -- # wait 60244 00:12:30.865 14:31:39 -- event/cpu_locks.sh@108 -- # killprocess 60260 00:12:30.865 14:31:39 -- common/autotest_common.sh@936 -- # '[' -z 60260 ']' 00:12:30.865 14:31:39 -- common/autotest_common.sh@940 -- # kill -0 60260 00:12:30.865 14:31:39 -- common/autotest_common.sh@941 -- # uname 00:12:30.865 14:31:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.865 14:31:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60260 00:12:30.865 14:31:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.865 14:31:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.865 14:31:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60260' 00:12:30.865 killing process with pid 60260 00:12:30.865 14:31:39 -- common/autotest_common.sh@955 -- # kill 60260 00:12:30.865 14:31:39 -- common/autotest_common.sh@960 -- # wait 60260 00:12:31.434 00:12:31.434 real 0m3.929s 00:12:31.434 user 0m4.670s 00:12:31.434 sys 0m0.935s 00:12:31.434 14:31:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.434 ************************************ 00:12:31.434 END TEST locking_app_on_unlocked_coremask 00:12:31.434 ************************************ 00:12:31.434 14:31:39 -- common/autotest_common.sh@10 -- # set +x 00:12:31.434 14:31:39 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:31.434 14:31:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:31.434 14:31:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.434 14:31:39 -- common/autotest_common.sh@10 -- # set +x 00:12:31.434 ************************************ 00:12:31.434 START TEST locking_app_on_locked_coremask 00:12:31.434 
************************************ 00:12:31.434 14:31:39 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:12:31.434 14:31:39 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60330 00:12:31.434 14:31:39 -- event/cpu_locks.sh@116 -- # waitforlisten 60330 /var/tmp/spdk.sock 00:12:31.434 14:31:39 -- common/autotest_common.sh@817 -- # '[' -z 60330 ']' 00:12:31.434 14:31:39 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:31.434 14:31:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.434 14:31:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:31.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.434 14:31:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.434 14:31:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:31.434 14:31:39 -- common/autotest_common.sh@10 -- # set +x 00:12:31.434 [2024-04-17 14:31:39.919542] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:31.434 [2024-04-17 14:31:39.919656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60330 ] 00:12:31.692 [2024-04-17 14:31:40.056434] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.692 [2024-04-17 14:31:40.123718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.692 14:31:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:31.692 14:31:40 -- common/autotest_common.sh@850 -- # return 0 00:12:31.692 14:31:40 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60334 00:12:31.693 14:31:40 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:31.693 14:31:40 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60334 /var/tmp/spdk2.sock 00:12:31.693 14:31:40 -- common/autotest_common.sh@638 -- # local es=0 00:12:31.693 14:31:40 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60334 /var/tmp/spdk2.sock 00:12:31.693 14:31:40 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:12:31.693 14:31:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:31.693 14:31:40 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:12:31.693 14:31:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:31.693 14:31:40 -- common/autotest_common.sh@641 -- # waitforlisten 60334 /var/tmp/spdk2.sock 00:12:31.693 14:31:40 -- common/autotest_common.sh@817 -- # '[' -z 60334 ']' 00:12:31.693 14:31:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:31.693 14:31:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:31.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:31.693 14:31:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:31.693 14:31:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:31.693 14:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:31.951 [2024-04-17 14:31:40.343058] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:31.951 [2024-04-17 14:31:40.343155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60334 ] 00:12:31.951 [2024-04-17 14:31:40.492630] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60330 has claimed it. 00:12:31.951 [2024-04-17 14:31:40.492711] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:32.518 ERROR: process (pid: 60334) is no longer running 00:12:32.518 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60334) - No such process 00:12:32.518 14:31:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:32.518 14:31:41 -- common/autotest_common.sh@850 -- # return 1 00:12:32.518 14:31:41 -- common/autotest_common.sh@641 -- # es=1 00:12:32.518 14:31:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:32.518 14:31:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:32.518 14:31:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:32.518 14:31:41 -- event/cpu_locks.sh@122 -- # locks_exist 60330 00:12:32.518 14:31:41 -- event/cpu_locks.sh@22 -- # lslocks -p 60330 00:12:32.518 14:31:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:33.085 14:31:41 -- event/cpu_locks.sh@124 -- # killprocess 60330 00:12:33.085 14:31:41 -- common/autotest_common.sh@936 -- # '[' -z 60330 ']' 00:12:33.085 14:31:41 -- common/autotest_common.sh@940 -- # kill -0 60330 00:12:33.085 14:31:41 -- common/autotest_common.sh@941 -- # uname 00:12:33.085 14:31:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:33.085 14:31:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60330 00:12:33.085 14:31:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:33.085 14:31:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:33.085 killing process with pid 60330 00:12:33.085 14:31:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60330' 00:12:33.085 14:31:41 -- common/autotest_common.sh@955 -- # kill 60330 00:12:33.085 14:31:41 -- common/autotest_common.sh@960 -- # wait 60330 00:12:33.343 00:12:33.343 real 0m1.931s 00:12:33.343 user 0m2.255s 00:12:33.343 sys 0m0.487s 00:12:33.343 14:31:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:33.343 14:31:41 -- common/autotest_common.sh@10 -- # set +x 00:12:33.343 ************************************ 00:12:33.343 END TEST locking_app_on_locked_coremask 00:12:33.343 ************************************ 00:12:33.343 14:31:41 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:33.343 14:31:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:33.343 14:31:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.343 14:31:41 -- common/autotest_common.sh@10 -- # set +x 00:12:33.343 ************************************ 00:12:33.343 START TEST locking_overlapped_coremask 00:12:33.343 ************************************ 00:12:33.343 14:31:41 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:12:33.343 14:31:41 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60384 00:12:33.343 14:31:41 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:33.343 14:31:41 -- event/cpu_locks.sh@133 -- # waitforlisten 60384 /var/tmp/spdk.sock 00:12:33.343 
14:31:41 -- common/autotest_common.sh@817 -- # '[' -z 60384 ']' 00:12:33.343 14:31:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.343 14:31:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:33.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.343 14:31:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.343 14:31:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:33.343 14:31:41 -- common/autotest_common.sh@10 -- # set +x 00:12:33.604 [2024-04-17 14:31:41.952429] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:33.604 [2024-04-17 14:31:41.952506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ] 00:12:33.604 [2024-04-17 14:31:42.088913] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.604 [2024-04-17 14:31:42.158007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.604 [2024-04-17 14:31:42.158147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.604 [2024-04-17 14:31:42.158153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.545 14:31:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:34.545 14:31:42 -- common/autotest_common.sh@850 -- # return 0 00:12:34.545 14:31:42 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60402 00:12:34.545 14:31:42 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:34.545 14:31:42 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60402 /var/tmp/spdk2.sock 00:12:34.545 14:31:42 -- common/autotest_common.sh@638 -- # local es=0 00:12:34.545 14:31:42 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60402 /var/tmp/spdk2.sock 00:12:34.546 14:31:42 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:12:34.546 14:31:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:34.546 14:31:42 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:12:34.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:34.546 14:31:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:34.546 14:31:42 -- common/autotest_common.sh@641 -- # waitforlisten 60402 /var/tmp/spdk2.sock 00:12:34.546 14:31:42 -- common/autotest_common.sh@817 -- # '[' -z 60402 ']' 00:12:34.546 14:31:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:34.546 14:31:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:34.546 14:31:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:34.546 14:31:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:34.546 14:31:42 -- common/autotest_common.sh@10 -- # set +x 00:12:34.546 [2024-04-17 14:31:42.933272] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:34.546 [2024-04-17 14:31:42.933356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60402 ] 00:12:34.546 [2024-04-17 14:31:43.074704] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60384 has claimed it. 00:12:34.546 [2024-04-17 14:31:43.074777] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:35.113 ERROR: process (pid: 60402) is no longer running 00:12:35.113 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60402) - No such process 00:12:35.113 14:31:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:35.113 14:31:43 -- common/autotest_common.sh@850 -- # return 1 00:12:35.113 14:31:43 -- common/autotest_common.sh@641 -- # es=1 00:12:35.113 14:31:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:35.113 14:31:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:35.113 14:31:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:35.113 14:31:43 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:35.113 14:31:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:35.113 14:31:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:35.113 14:31:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:35.113 14:31:43 -- event/cpu_locks.sh@141 -- # killprocess 60384 00:12:35.113 14:31:43 -- common/autotest_common.sh@936 -- # '[' -z 60384 ']' 00:12:35.113 14:31:43 -- common/autotest_common.sh@940 -- # kill -0 60384 00:12:35.113 14:31:43 -- common/autotest_common.sh@941 -- # uname 00:12:35.113 14:31:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:35.113 14:31:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60384 00:12:35.372 14:31:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:35.372 14:31:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:35.372 14:31:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60384' 00:12:35.372 killing process with pid 60384 00:12:35.372 14:31:43 -- common/autotest_common.sh@955 -- # kill 60384 00:12:35.372 14:31:43 -- common/autotest_common.sh@960 -- # wait 60384 00:12:35.630 00:12:35.630 real 0m2.096s 00:12:35.630 user 0m5.962s 00:12:35.630 sys 0m0.308s 00:12:35.630 14:31:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:35.630 14:31:43 -- common/autotest_common.sh@10 -- # set +x 00:12:35.630 ************************************ 00:12:35.630 END TEST locking_overlapped_coremask 00:12:35.630 ************************************ 00:12:35.630 14:31:44 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:35.630 14:31:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:35.630 14:31:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.630 14:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:35.630 ************************************ 00:12:35.630 START TEST locking_overlapped_coremask_via_rpc 00:12:35.630 ************************************ 
00:12:35.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.630 14:31:44 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:12:35.630 14:31:44 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60451 00:12:35.630 14:31:44 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:35.630 14:31:44 -- event/cpu_locks.sh@149 -- # waitforlisten 60451 /var/tmp/spdk.sock 00:12:35.630 14:31:44 -- common/autotest_common.sh@817 -- # '[' -z 60451 ']' 00:12:35.630 14:31:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.630 14:31:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:35.630 14:31:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.630 14:31:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:35.630 14:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:35.630 [2024-04-17 14:31:44.168135] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:35.630 [2024-04-17 14:31:44.168235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60451 ] 00:12:35.891 [2024-04-17 14:31:44.307707] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:35.891 [2024-04-17 14:31:44.307766] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:35.891 [2024-04-17 14:31:44.378636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.891 [2024-04-17 14:31:44.378781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.891 [2024-04-17 14:31:44.378789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.828 14:31:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:36.828 14:31:45 -- common/autotest_common.sh@850 -- # return 0 00:12:36.828 14:31:45 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60469 00:12:36.828 14:31:45 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:36.828 14:31:45 -- event/cpu_locks.sh@153 -- # waitforlisten 60469 /var/tmp/spdk2.sock 00:12:36.828 14:31:45 -- common/autotest_common.sh@817 -- # '[' -z 60469 ']' 00:12:36.828 14:31:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:36.828 14:31:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:36.828 14:31:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:36.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:36.828 14:31:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:36.828 14:31:45 -- common/autotest_common.sh@10 -- # set +x 00:12:36.828 [2024-04-17 14:31:45.227250] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:36.828 [2024-04-17 14:31:45.227543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60469 ] 00:12:36.828 [2024-04-17 14:31:45.370305] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:36.828 [2024-04-17 14:31:45.370360] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:37.086 [2024-04-17 14:31:45.490578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.086 [2024-04-17 14:31:45.494054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:37.086 [2024-04-17 14:31:45.494056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.698 14:31:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:37.698 14:31:46 -- common/autotest_common.sh@850 -- # return 0 00:12:37.698 14:31:46 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:37.698 14:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.698 14:31:46 -- common/autotest_common.sh@10 -- # set +x 00:12:37.698 14:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.698 14:31:46 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:37.698 14:31:46 -- common/autotest_common.sh@638 -- # local es=0 00:12:37.698 14:31:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:37.698 14:31:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:12:37.698 14:31:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:37.698 14:31:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:12:37.698 14:31:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:37.698 14:31:46 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:37.698 14:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.698 14:31:46 -- common/autotest_common.sh@10 -- # set +x 00:12:37.698 [2024-04-17 14:31:46.245075] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60451 has claimed it. 00:12:37.698 request: 00:12:37.698 { 00:12:37.698 "method": "framework_enable_cpumask_locks", 00:12:37.698 "req_id": 1 00:12:37.698 } 00:12:37.698 Got JSON-RPC error response 00:12:37.698 response: 00:12:37.698 { 00:12:37.698 "code": -32603, 00:12:37.698 "message": "Failed to claim CPU core: 2" 00:12:37.698 } 00:12:37.698 14:31:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:12:37.698 14:31:46 -- common/autotest_common.sh@641 -- # es=1 00:12:37.698 14:31:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:37.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:37.698 14:31:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:37.698 14:31:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:37.698 14:31:46 -- event/cpu_locks.sh@158 -- # waitforlisten 60451 /var/tmp/spdk.sock 00:12:37.698 14:31:46 -- common/autotest_common.sh@817 -- # '[' -z 60451 ']' 00:12:37.698 14:31:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.698 14:31:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:37.698 14:31:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.698 14:31:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:37.698 14:31:46 -- common/autotest_common.sh@10 -- # set +x 00:12:37.976 14:31:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:37.976 14:31:46 -- common/autotest_common.sh@850 -- # return 0 00:12:37.976 14:31:46 -- event/cpu_locks.sh@159 -- # waitforlisten 60469 /var/tmp/spdk2.sock 00:12:37.976 14:31:46 -- common/autotest_common.sh@817 -- # '[' -z 60469 ']' 00:12:37.976 14:31:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:37.976 14:31:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:37.976 14:31:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:37.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:37.976 14:31:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:37.976 14:31:46 -- common/autotest_common.sh@10 -- # set +x 00:12:38.234 ************************************ 00:12:38.234 END TEST locking_overlapped_coremask_via_rpc 00:12:38.234 ************************************ 00:12:38.234 14:31:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:38.234 14:31:46 -- common/autotest_common.sh@850 -- # return 0 00:12:38.234 14:31:46 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:38.234 14:31:46 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:38.234 14:31:46 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:38.234 14:31:46 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:38.234 00:12:38.234 real 0m2.724s 00:12:38.234 user 0m1.443s 00:12:38.234 sys 0m0.199s 00:12:38.234 14:31:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:38.234 14:31:46 -- common/autotest_common.sh@10 -- # set +x 00:12:38.493 14:31:46 -- event/cpu_locks.sh@174 -- # cleanup 00:12:38.493 14:31:46 -- event/cpu_locks.sh@15 -- # [[ -z 60451 ]] 00:12:38.493 14:31:46 -- event/cpu_locks.sh@15 -- # killprocess 60451 00:12:38.493 14:31:46 -- common/autotest_common.sh@936 -- # '[' -z 60451 ']' 00:12:38.493 14:31:46 -- common/autotest_common.sh@940 -- # kill -0 60451 00:12:38.493 14:31:46 -- common/autotest_common.sh@941 -- # uname 00:12:38.493 14:31:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:38.493 14:31:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60451 00:12:38.493 14:31:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:38.493 killing process with pid 60451 00:12:38.493 14:31:46 -- common/autotest_common.sh@946 -- # '[' 
reactor_0 = sudo ']' 00:12:38.493 14:31:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60451' 00:12:38.493 14:31:46 -- common/autotest_common.sh@955 -- # kill 60451 00:12:38.493 14:31:46 -- common/autotest_common.sh@960 -- # wait 60451 00:12:38.752 14:31:47 -- event/cpu_locks.sh@16 -- # [[ -z 60469 ]] 00:12:38.752 14:31:47 -- event/cpu_locks.sh@16 -- # killprocess 60469 00:12:38.752 14:31:47 -- common/autotest_common.sh@936 -- # '[' -z 60469 ']' 00:12:38.752 14:31:47 -- common/autotest_common.sh@940 -- # kill -0 60469 00:12:38.752 14:31:47 -- common/autotest_common.sh@941 -- # uname 00:12:38.752 14:31:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:38.752 14:31:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60469 00:12:38.752 killing process with pid 60469 00:12:38.752 14:31:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:38.752 14:31:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:38.752 14:31:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60469' 00:12:38.752 14:31:47 -- common/autotest_common.sh@955 -- # kill 60469 00:12:38.752 14:31:47 -- common/autotest_common.sh@960 -- # wait 60469 00:12:39.011 14:31:47 -- event/cpu_locks.sh@18 -- # rm -f 00:12:39.011 14:31:47 -- event/cpu_locks.sh@1 -- # cleanup 00:12:39.011 14:31:47 -- event/cpu_locks.sh@15 -- # [[ -z 60451 ]] 00:12:39.011 14:31:47 -- event/cpu_locks.sh@15 -- # killprocess 60451 00:12:39.011 Process with pid 60451 is not found 00:12:39.011 Process with pid 60469 is not found 00:12:39.011 14:31:47 -- common/autotest_common.sh@936 -- # '[' -z 60451 ']' 00:12:39.011 14:31:47 -- common/autotest_common.sh@940 -- # kill -0 60451 00:12:39.011 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60451) - No such process 00:12:39.011 14:31:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60451 is not found' 00:12:39.011 14:31:47 -- event/cpu_locks.sh@16 -- # [[ -z 60469 ]] 00:12:39.011 14:31:47 -- event/cpu_locks.sh@16 -- # killprocess 60469 00:12:39.011 14:31:47 -- common/autotest_common.sh@936 -- # '[' -z 60469 ']' 00:12:39.011 14:31:47 -- common/autotest_common.sh@940 -- # kill -0 60469 00:12:39.011 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60469) - No such process 00:12:39.011 14:31:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60469 is not found' 00:12:39.011 14:31:47 -- event/cpu_locks.sh@18 -- # rm -f 00:12:39.011 ************************************ 00:12:39.011 END TEST cpu_locks 00:12:39.011 ************************************ 00:12:39.011 00:12:39.011 real 0m18.870s 00:12:39.011 user 0m34.468s 00:12:39.011 sys 0m4.520s 00:12:39.011 14:31:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:39.011 14:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:39.011 ************************************ 00:12:39.011 END TEST event 00:12:39.011 ************************************ 00:12:39.011 00:12:39.011 real 0m46.550s 00:12:39.011 user 1m31.930s 00:12:39.011 sys 0m7.994s 00:12:39.011 14:31:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:39.011 14:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:39.011 14:31:47 -- spdk/autotest.sh@177 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:39.011 14:31:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:39.011 14:31:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.011 14:31:47 
-- common/autotest_common.sh@10 -- # set +x 00:12:39.269 ************************************ 00:12:39.269 START TEST thread 00:12:39.269 ************************************ 00:12:39.269 14:31:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:39.269 * Looking for test storage... 00:12:39.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:39.269 14:31:47 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:39.269 14:31:47 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:39.269 14:31:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.269 14:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:39.269 ************************************ 00:12:39.269 START TEST thread_poller_perf 00:12:39.269 ************************************ 00:12:39.269 14:31:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:39.269 [2024-04-17 14:31:47.827240] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:39.269 [2024-04-17 14:31:47.827525] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60602 ] 00:12:39.528 [2024-04-17 14:31:47.963633] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.528 [2024-04-17 14:31:48.021337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.528 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:40.904 ====================================== 00:12:40.904 busy:2208891763 (cyc) 00:12:40.904 total_run_count: 286000 00:12:40.904 tsc_hz: 2200000000 (cyc) 00:12:40.904 ====================================== 00:12:40.904 poller_cost: 7723 (cyc), 3510 (nsec) 00:12:40.904 00:12:40.904 real 0m1.316s 00:12:40.904 user 0m1.171s 00:12:40.904 sys 0m0.036s 00:12:40.904 14:31:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:40.904 14:31:49 -- common/autotest_common.sh@10 -- # set +x 00:12:40.904 ************************************ 00:12:40.904 END TEST thread_poller_perf 00:12:40.904 ************************************ 00:12:40.904 14:31:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:40.904 14:31:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:40.904 14:31:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.904 14:31:49 -- common/autotest_common.sh@10 -- # set +x 00:12:40.904 ************************************ 00:12:40.904 START TEST thread_poller_perf 00:12:40.904 ************************************ 00:12:40.904 14:31:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:40.904 [2024-04-17 14:31:49.264274] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:40.904 [2024-04-17 14:31:49.264358] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60641 ] 00:12:40.904 [2024-04-17 14:31:49.409931] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.904 [2024-04-17 14:31:49.478921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.904 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:12:42.285 ====================================== 00:12:42.285 busy:2202988232 (cyc) 00:12:42.285 total_run_count: 3608000 00:12:42.285 tsc_hz: 2200000000 (cyc) 00:12:42.285 ====================================== 00:12:42.285 poller_cost: 610 (cyc), 277 (nsec) 00:12:42.285 ************************************ 00:12:42.285 END TEST thread_poller_perf 00:12:42.285 ************************************ 00:12:42.285 00:12:42.285 real 0m1.332s 00:12:42.285 user 0m1.179s 00:12:42.285 sys 0m0.045s 00:12:42.285 14:31:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:42.285 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:42.285 14:31:50 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:42.285 ************************************ 00:12:42.285 END TEST thread 00:12:42.285 ************************************ 00:12:42.285 00:12:42.285 real 0m2.959s 00:12:42.285 user 0m2.463s 00:12:42.285 sys 0m0.248s 00:12:42.285 14:31:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:42.285 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:42.285 14:31:50 -- spdk/autotest.sh@178 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:42.285 14:31:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:42.285 14:31:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.285 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:42.285 ************************************ 00:12:42.285 START TEST accel 00:12:42.285 ************************************ 00:12:42.285 14:31:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:42.285 * Looking for test storage... 00:12:42.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:42.285 14:31:50 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:42.285 14:31:50 -- accel/accel.sh@82 -- # get_expected_opcs 00:12:42.285 14:31:50 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:42.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.285 14:31:50 -- accel/accel.sh@62 -- # spdk_tgt_pid=60716 00:12:42.285 14:31:50 -- accel/accel.sh@63 -- # waitforlisten 60716 00:12:42.285 14:31:50 -- common/autotest_common.sh@817 -- # '[' -z 60716 ']' 00:12:42.285 14:31:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.285 14:31:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:42.285 14:31:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:42.285 14:31:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:42.285 14:31:50 -- accel/accel.sh@61 -- # build_accel_config 00:12:42.285 14:31:50 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:42.285 14:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:42.285 14:31:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:42.285 14:31:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:42.285 14:31:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:42.285 14:31:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:42.285 14:31:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:42.285 14:31:50 -- accel/accel.sh@40 -- # local IFS=, 00:12:42.285 14:31:50 -- accel/accel.sh@41 -- # jq -r . 00:12:42.285 [2024-04-17 14:31:50.862644] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:42.285 [2024-04-17 14:31:50.862907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 00:12:42.544 [2024-04-17 14:31:50.996607] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.544 [2024-04-17 14:31:51.082923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.480 14:31:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:43.480 14:31:51 -- common/autotest_common.sh@850 -- # return 0 00:12:43.480 14:31:51 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:43.480 14:31:51 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:43.480 14:31:51 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:43.480 14:31:51 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:43.480 14:31:51 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:43.480 14:31:51 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:43.480 14:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.480 14:31:51 -- common/autotest_common.sh@10 -- # set +x 00:12:43.480 14:31:51 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:12:43.480 14:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.480 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.480 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.480 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.480 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.480 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.480 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.480 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.480 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.480 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.480 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.480 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.481 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.481 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.481 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.481 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.481 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.481 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.481 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.481 14:31:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # IFS== 00:12:43.481 14:31:51 -- accel/accel.sh@72 -- # read -r opc module 00:12:43.481 14:31:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:43.481 14:31:51 -- accel/accel.sh@75 -- # killprocess 60716 00:12:43.481 14:31:51 -- common/autotest_common.sh@936 -- # '[' -z 60716 ']' 00:12:43.481 14:31:51 -- common/autotest_common.sh@940 -- # kill -0 60716 00:12:43.481 14:31:51 -- common/autotest_common.sh@941 -- # uname 00:12:43.481 14:31:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:43.481 14:31:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60716 00:12:43.481 14:31:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:43.481 14:31:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:43.481 killing process with pid 60716 00:12:43.481 14:31:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60716' 00:12:43.481 14:31:51 -- common/autotest_common.sh@955 -- # kill 60716 00:12:43.481 14:31:51 -- common/autotest_common.sh@960 -- # wait 60716 00:12:43.739 14:31:52 -- accel/accel.sh@76 -- # trap - ERR 00:12:43.739 14:31:52 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:43.739 14:31:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:43.739 14:31:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:43.739 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:12:43.739 14:31:52 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:12:43.739 14:31:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:43.739 14:31:52 -- accel/accel.sh@12 -- # build_accel_config 00:12:43.739 14:31:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:43.739 14:31:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:43.739 14:31:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:43.739 14:31:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:43.739 14:31:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:43.740 14:31:52 -- accel/accel.sh@40 -- # local IFS=, 00:12:43.740 14:31:52 -- accel/accel.sh@41 -- # jq -r . 
00:12:43.740 14:31:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:43.740 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:12:43.740 14:31:52 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:43.740 14:31:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:43.740 14:31:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:43.740 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:12:43.998 ************************************ 00:12:43.998 START TEST accel_missing_filename 00:12:43.998 ************************************ 00:12:43.998 14:31:52 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:12:43.998 14:31:52 -- common/autotest_common.sh@638 -- # local es=0 00:12:43.998 14:31:52 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:43.998 14:31:52 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:43.998 14:31:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:43.998 14:31:52 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:43.998 14:31:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:43.998 14:31:52 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:12:43.998 14:31:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:43.998 14:31:52 -- accel/accel.sh@12 -- # build_accel_config 00:12:43.998 14:31:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:43.998 14:31:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:43.998 14:31:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:43.998 14:31:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:43.998 14:31:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:43.998 14:31:52 -- accel/accel.sh@40 -- # local IFS=, 00:12:43.998 14:31:52 -- accel/accel.sh@41 -- # jq -r . 00:12:43.998 [2024-04-17 14:31:52.421968] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:43.998 [2024-04-17 14:31:52.422033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60781 ] 00:12:43.998 [2024-04-17 14:31:52.559177] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.258 [2024-04-17 14:31:52.618998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.258 [2024-04-17 14:31:52.651124] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:44.258 [2024-04-17 14:31:52.692084] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:12:44.258 A filename is required. 
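For reference, the "A filename is required." failure captured above comes from driving the compress workload without an input file. A minimal sketch of reproducing it by hand, using the accel_perf binary path shown in the trace and omitting the JSON config piped on /dev/fd/62; the second command's input path is purely hypothetical:
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress                       # no -l <input file>: expected to fail for compress/decompress
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l ./some_input_file  # hypothetical input path; with -l supplied the workload can start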
00:12:44.258 14:31:52 -- common/autotest_common.sh@641 -- # es=234 00:12:44.258 14:31:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:44.258 14:31:52 -- common/autotest_common.sh@650 -- # es=106 00:12:44.258 14:31:52 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:44.258 14:31:52 -- common/autotest_common.sh@658 -- # es=1 00:12:44.258 14:31:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:44.258 00:12:44.258 real 0m0.391s 00:12:44.258 user 0m0.261s 00:12:44.258 sys 0m0.076s 00:12:44.258 14:31:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:44.258 ************************************ 00:12:44.258 END TEST accel_missing_filename 00:12:44.258 ************************************ 00:12:44.258 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:12:44.258 14:31:52 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:44.258 14:31:52 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:12:44.258 14:31:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.258 14:31:52 -- common/autotest_common.sh@10 -- # set +x 00:12:44.517 ************************************ 00:12:44.517 START TEST accel_compress_verify 00:12:44.517 ************************************ 00:12:44.517 14:31:52 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:44.517 14:31:52 -- common/autotest_common.sh@638 -- # local es=0 00:12:44.517 14:31:52 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:44.517 14:31:52 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:44.517 14:31:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.517 14:31:52 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:44.517 14:31:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.517 14:31:52 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:44.517 14:31:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:44.517 14:31:52 -- accel/accel.sh@12 -- # build_accel_config 00:12:44.517 14:31:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:44.517 14:31:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:44.517 14:31:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:44.517 14:31:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:44.517 14:31:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:44.517 14:31:52 -- accel/accel.sh@40 -- # local IFS=, 00:12:44.517 14:31:52 -- accel/accel.sh@41 -- # jq -r . 00:12:44.517 [2024-04-17 14:31:52.919585] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:44.518 [2024-04-17 14:31:52.919665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60811 ] 00:12:44.518 [2024-04-17 14:31:53.056797] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.777 [2024-04-17 14:31:53.129528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.777 [2024-04-17 14:31:53.166298] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:44.777 [2024-04-17 14:31:53.211317] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:12:44.777 00:12:44.777 Compression does not support the verify option, aborting. 00:12:44.777 14:31:53 -- common/autotest_common.sh@641 -- # es=161 00:12:44.777 14:31:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:44.777 14:31:53 -- common/autotest_common.sh@650 -- # es=33 00:12:44.777 14:31:53 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:44.777 14:31:53 -- common/autotest_common.sh@658 -- # es=1 00:12:44.777 14:31:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:44.777 00:12:44.777 real 0m0.431s 00:12:44.777 user 0m0.301s 00:12:44.777 sys 0m0.075s 00:12:44.777 14:31:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:44.777 ************************************ 00:12:44.777 END TEST accel_compress_verify 00:12:44.777 ************************************ 00:12:44.777 14:31:53 -- common/autotest_common.sh@10 -- # set +x 00:12:44.777 14:31:53 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:44.777 14:31:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:44.777 14:31:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.777 14:31:53 -- common/autotest_common.sh@10 -- # set +x 00:12:45.037 ************************************ 00:12:45.037 START TEST accel_wrong_workload 00:12:45.037 ************************************ 00:12:45.037 14:31:53 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:12:45.037 14:31:53 -- common/autotest_common.sh@638 -- # local es=0 00:12:45.037 14:31:53 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:45.037 14:31:53 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:45.037 14:31:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:45.037 14:31:53 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:45.037 14:31:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:45.037 14:31:53 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:12:45.037 14:31:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:45.037 14:31:53 -- accel/accel.sh@12 -- # build_accel_config 00:12:45.037 14:31:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:45.037 14:31:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:45.037 14:31:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:45.037 14:31:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:45.037 14:31:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:45.037 14:31:53 -- accel/accel.sh@40 -- # local IFS=, 00:12:45.037 14:31:53 -- accel/accel.sh@41 -- # jq -r . 
00:12:45.037 Unsupported workload type: foobar 00:12:45.037 [2024-04-17 14:31:53.460082] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:45.037 accel_perf options: 00:12:45.037 [-h help message] 00:12:45.037 [-q queue depth per core] 00:12:45.037 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:45.037 [-T number of threads per core 00:12:45.037 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:45.037 [-t time in seconds] 00:12:45.037 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:45.037 [ dif_verify, , dif_generate, dif_generate_copy 00:12:45.037 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:45.037 [-l for compress/decompress workloads, name of uncompressed input file 00:12:45.037 [-S for crc32c workload, use this seed value (default 0) 00:12:45.037 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:45.037 [-f for fill workload, use this BYTE value (default 255) 00:12:45.037 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:45.037 [-y verify result if this switch is on] 00:12:45.037 [-a tasks to allocate per core (default: same value as -q)] 00:12:45.037 Can be used to spread operations across a wider range of memory. 00:12:45.037 14:31:53 -- common/autotest_common.sh@641 -- # es=1 00:12:45.037 14:31:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:45.037 14:31:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:45.037 ************************************ 00:12:45.037 END TEST accel_wrong_workload 00:12:45.037 ************************************ 00:12:45.037 14:31:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:45.037 00:12:45.037 real 0m0.030s 00:12:45.037 user 0m0.017s 00:12:45.037 sys 0m0.011s 00:12:45.037 14:31:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.037 14:31:53 -- common/autotest_common.sh@10 -- # set +x 00:12:45.037 14:31:53 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:45.037 14:31:53 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:12:45.037 14:31:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.037 14:31:53 -- common/autotest_common.sh@10 -- # set +x 00:12:45.037 ************************************ 00:12:45.037 START TEST accel_negative_buffers 00:12:45.037 ************************************ 00:12:45.037 14:31:53 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:45.037 14:31:53 -- common/autotest_common.sh@638 -- # local es=0 00:12:45.037 14:31:53 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:45.037 14:31:53 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:45.037 14:31:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:45.037 14:31:53 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:45.037 14:31:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:45.037 14:31:53 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:12:45.037 14:31:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:45.037 14:31:53 -- accel/accel.sh@12 -- # 
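The option summary above is printed because "foobar" is not a recognized -w value. As a rough illustration grounded in that summary (same binary path as in the trace, config omitted), swapping in one of the listed workload types lets argument parsing proceed:
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar   # rejected: foobar is not in the supported workload list
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c   # crc32c is one of the workload types listed in the help text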
build_accel_config 00:12:45.037 14:31:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:45.037 14:31:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:45.037 14:31:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:45.037 14:31:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:45.037 14:31:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:45.037 14:31:53 -- accel/accel.sh@40 -- # local IFS=, 00:12:45.037 14:31:53 -- accel/accel.sh@41 -- # jq -r . 00:12:45.037 -x option must be non-negative. 00:12:45.037 [2024-04-17 14:31:53.597591] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:45.037 accel_perf options: 00:12:45.037 [-h help message] 00:12:45.037 [-q queue depth per core] 00:12:45.037 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:45.037 [-T number of threads per core 00:12:45.037 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:45.037 [-t time in seconds] 00:12:45.037 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:45.037 [ dif_verify, , dif_generate, dif_generate_copy 00:12:45.037 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:45.037 [-l for compress/decompress workloads, name of uncompressed input file 00:12:45.037 [-S for crc32c workload, use this seed value (default 0) 00:12:45.037 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:45.037 [-f for fill workload, use this BYTE value (default 255) 00:12:45.037 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:45.037 [-y verify result if this switch is on] 00:12:45.037 [-a tasks to allocate per core (default: same value as -q)] 00:12:45.037 Can be used to spread operations across a wider range of memory. 
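Similarly, the negative-buffers case trips on -x, which per the help text must be non-negative and defaults to the minimum of 2 source buffers for xor. A hedged sketch of the distinction, again with the config omitted:
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: -x must be non-negative
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 4    # plausible valid value; xor uses at least 2 source buffers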
00:12:45.037 14:31:53 -- common/autotest_common.sh@641 -- # es=1 00:12:45.037 14:31:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:45.037 14:31:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:45.037 ************************************ 00:12:45.037 END TEST accel_negative_buffers 00:12:45.037 ************************************ 00:12:45.037 14:31:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:45.037 00:12:45.037 real 0m0.027s 00:12:45.037 user 0m0.017s 00:12:45.037 sys 0m0.009s 00:12:45.037 14:31:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.037 14:31:53 -- common/autotest_common.sh@10 -- # set +x 00:12:45.298 14:31:53 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:45.298 14:31:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:45.298 14:31:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.298 14:31:53 -- common/autotest_common.sh@10 -- # set +x 00:12:45.298 ************************************ 00:12:45.298 START TEST accel_crc32c 00:12:45.298 ************************************ 00:12:45.298 14:31:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:45.298 14:31:53 -- accel/accel.sh@16 -- # local accel_opc 00:12:45.298 14:31:53 -- accel/accel.sh@17 -- # local accel_module 00:12:45.298 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.298 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.298 14:31:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:45.298 14:31:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:45.298 14:31:53 -- accel/accel.sh@12 -- # build_accel_config 00:12:45.298 14:31:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:45.298 14:31:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:45.298 14:31:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:45.298 14:31:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:45.298 14:31:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:45.298 14:31:53 -- accel/accel.sh@40 -- # local IFS=, 00:12:45.298 14:31:53 -- accel/accel.sh@41 -- # jq -r . 00:12:45.298 [2024-04-17 14:31:53.732677] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
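The crc32c test that starts here drives accel_perf with -t 1 -w crc32c -S 32 -y, as shown in the run_test line. A minimal stand-alone sketch of the same invocation, without the accel JSON config fed on /dev/fd/62 (in this run every opcode resolved to the software module anyway):
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # -S 32 sets the crc32c seed, -y verifies each result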
00:12:45.298 [2024-04-17 14:31:53.732811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60881 ] 00:12:45.298 [2024-04-17 14:31:53.870911] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.557 [2024-04-17 14:31:53.930204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val= 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val= 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val=0x1 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val= 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val= 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val=crc32c 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val=32 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val= 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val=software 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@22 -- # accel_module=software 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val=32 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val=32 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val=1 00:12:45.557 14:31:53 
-- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val=Yes 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val= 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:45.557 14:31:53 -- accel/accel.sh@20 -- # val= 00:12:45.557 14:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:45.557 14:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:46.571 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:46.571 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:46.571 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:46.571 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:46.571 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:46.571 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:46.571 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:46.571 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:46.571 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:46.571 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:46.571 ************************************ 00:12:46.571 END TEST accel_crc32c 00:12:46.571 ************************************ 00:12:46.571 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:46.571 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:46.571 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:46.571 14:31:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:46.571 14:31:55 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:46.571 14:31:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:46.571 00:12:46.571 real 0m1.399s 00:12:46.571 user 0m1.223s 00:12:46.571 sys 0m0.079s 00:12:46.571 14:31:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:46.571 14:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:46.571 14:31:55 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:46.571 14:31:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:46.571 14:31:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:46.571 14:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:46.829 ************************************ 00:12:46.829 START TEST accel_crc32c_C2 00:12:46.829 
************************************ 00:12:46.829 14:31:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:46.829 14:31:55 -- accel/accel.sh@16 -- # local accel_opc 00:12:46.829 14:31:55 -- accel/accel.sh@17 -- # local accel_module 00:12:46.829 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:46.829 14:31:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:46.829 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:46.829 14:31:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:46.829 14:31:55 -- accel/accel.sh@12 -- # build_accel_config 00:12:46.829 14:31:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:46.829 14:31:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:46.829 14:31:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:46.829 14:31:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:46.829 14:31:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:46.829 14:31:55 -- accel/accel.sh@40 -- # local IFS=, 00:12:46.829 14:31:55 -- accel/accel.sh@41 -- # jq -r . 00:12:46.829 [2024-04-17 14:31:55.247099] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:46.829 [2024-04-17 14:31:55.247186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60920 ] 00:12:46.829 [2024-04-17 14:31:55.385414] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.085 [2024-04-17 14:31:55.455522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.085 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:47.085 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.085 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:47.085 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.085 14:31:55 -- accel/accel.sh@20 -- # val=0x1 00:12:47.085 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.085 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:47.085 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.085 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:47.085 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.085 14:31:55 -- accel/accel.sh@20 -- # val=crc32c 00:12:47.085 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.085 14:31:55 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.085 14:31:55 -- accel/accel.sh@20 -- # val=0 00:12:47.085 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.085 14:31:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:47.085 14:31:55 -- accel/accel.sh@21 -- # case "$var" 
in 00:12:47.085 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.086 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:47.086 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.086 14:31:55 -- accel/accel.sh@20 -- # val=software 00:12:47.086 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.086 14:31:55 -- accel/accel.sh@22 -- # accel_module=software 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.086 14:31:55 -- accel/accel.sh@20 -- # val=32 00:12:47.086 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.086 14:31:55 -- accel/accel.sh@20 -- # val=32 00:12:47.086 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.086 14:31:55 -- accel/accel.sh@20 -- # val=1 00:12:47.086 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.086 14:31:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:47.086 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.086 14:31:55 -- accel/accel.sh@20 -- # val=Yes 00:12:47.086 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.086 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:47.086 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:47.086 14:31:55 -- accel/accel.sh@20 -- # val= 00:12:47.086 14:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:47.086 14:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:56 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:56 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:56 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:56 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:56 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 ************************************ 
00:12:48.461 END TEST accel_crc32c_C2 00:12:48.461 ************************************ 00:12:48.461 14:31:56 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:48.461 14:31:56 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:48.461 14:31:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:48.461 00:12:48.461 real 0m1.420s 00:12:48.461 user 0m1.242s 00:12:48.461 sys 0m0.080s 00:12:48.461 14:31:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:48.461 14:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:48.461 14:31:56 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:48.461 14:31:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:48.461 14:31:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.461 14:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:48.461 ************************************ 00:12:48.461 START TEST accel_copy 00:12:48.461 ************************************ 00:12:48.461 14:31:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:12:48.461 14:31:56 -- accel/accel.sh@16 -- # local accel_opc 00:12:48.461 14:31:56 -- accel/accel.sh@17 -- # local accel_module 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:48.461 14:31:56 -- accel/accel.sh@12 -- # build_accel_config 00:12:48.461 14:31:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:48.461 14:31:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:48.461 14:31:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:48.461 14:31:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.461 14:31:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.461 14:31:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:48.461 14:31:56 -- accel/accel.sh@40 -- # local IFS=, 00:12:48.461 14:31:56 -- accel/accel.sh@41 -- # jq -r . 00:12:48.461 [2024-04-17 14:31:56.780126] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
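The copy test follows the same pattern with -w copy. A rough by-hand equivalent of what accel_test wraps here, assuming the binary path from the trace and the default 4096-byte transfer size mentioned in the -o help text:
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y   # one-second verified copy run using the default transfer size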
00:12:48.461 [2024-04-17 14:31:56.780218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60958 ] 00:12:48.461 [2024-04-17 14:31:56.916509] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.461 [2024-04-17 14:31:56.975119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val=0x1 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val=copy 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@23 -- # accel_opc=copy 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val= 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val=software 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@22 -- # accel_module=software 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val=32 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val=32 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.461 14:31:57 -- accel/accel.sh@20 -- # val=1 00:12:48.461 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.461 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.462 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.462 14:31:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:48.462 
14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.462 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.462 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.462 14:31:57 -- accel/accel.sh@20 -- # val=Yes 00:12:48.462 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.462 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.462 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.462 14:31:57 -- accel/accel.sh@20 -- # val= 00:12:48.462 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.462 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.462 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:48.462 14:31:57 -- accel/accel.sh@20 -- # val= 00:12:48.462 14:31:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.462 14:31:57 -- accel/accel.sh@19 -- # IFS=: 00:12:48.462 14:31:57 -- accel/accel.sh@19 -- # read -r var val 00:12:49.836 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:49.836 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:49.836 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:49.836 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:49.836 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:49.836 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:49.836 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:49.836 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:49.836 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:49.836 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:49.836 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:49.836 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:49.836 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:49.836 14:31:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:49.836 14:31:58 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:49.836 14:31:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:49.836 00:12:49.836 real 0m1.393s 00:12:49.836 user 0m1.227s 00:12:49.836 sys 0m0.069s 00:12:49.836 14:31:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:49.836 ************************************ 00:12:49.836 END TEST accel_copy 00:12:49.836 ************************************ 00:12:49.836 14:31:58 -- common/autotest_common.sh@10 -- # set +x 00:12:49.836 14:31:58 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:49.836 14:31:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:49.836 14:31:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.836 14:31:58 -- common/autotest_common.sh@10 -- # set +x 00:12:49.837 ************************************ 00:12:49.837 START TEST accel_fill 00:12:49.837 ************************************ 00:12:49.837 14:31:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:49.837 14:31:58 -- accel/accel.sh@16 -- # local accel_opc 00:12:49.837 14:31:58 -- accel/accel.sh@17 -- # local 
accel_module 00:12:49.837 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:49.837 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:49.837 14:31:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:49.837 14:31:58 -- accel/accel.sh@12 -- # build_accel_config 00:12:49.837 14:31:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:49.837 14:31:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:49.837 14:31:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:49.837 14:31:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:49.837 14:31:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:49.837 14:31:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:49.837 14:31:58 -- accel/accel.sh@40 -- # local IFS=, 00:12:49.837 14:31:58 -- accel/accel.sh@41 -- # jq -r . 00:12:49.837 [2024-04-17 14:31:58.284293] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:49.837 [2024-04-17 14:31:58.284390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:12:49.837 [2024-04-17 14:31:58.420359] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.095 [2024-04-17 14:31:58.488075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val=0x1 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val=fill 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@23 -- # accel_opc=fill 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val=0x80 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case 
"$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val=software 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@22 -- # accel_module=software 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val=64 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val=64 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val=1 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val=Yes 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:50.095 14:31:58 -- accel/accel.sh@20 -- # val= 00:12:50.095 14:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:50.095 14:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:51.470 14:31:59 -- accel/accel.sh@20 -- # val= 00:12:51.470 14:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:51.470 14:31:59 -- accel/accel.sh@20 -- # val= 00:12:51.470 14:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:51.470 14:31:59 -- accel/accel.sh@20 -- # val= 00:12:51.470 14:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:51.470 14:31:59 -- accel/accel.sh@20 -- # val= 00:12:51.470 14:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:51.470 14:31:59 -- accel/accel.sh@20 -- # val= 00:12:51.470 14:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:51.470 14:31:59 -- accel/accel.sh@20 -- # val= 00:12:51.470 14:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.470 14:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:31:59 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:12:51.471 14:31:59 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:51.471 14:31:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:51.471 00:12:51.471 real 0m1.410s 00:12:51.471 user 0m1.231s 00:12:51.471 sys 0m0.080s 00:12:51.471 14:31:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:51.471 14:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:51.471 ************************************ 00:12:51.471 END TEST accel_fill 00:12:51.471 ************************************ 00:12:51.471 14:31:59 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:51.471 14:31:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:51.471 14:31:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.471 14:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:51.471 ************************************ 00:12:51.471 START TEST accel_copy_crc32c 00:12:51.471 ************************************ 00:12:51.471 14:31:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:12:51.471 14:31:59 -- accel/accel.sh@16 -- # local accel_opc 00:12:51.471 14:31:59 -- accel/accel.sh@17 -- # local accel_module 00:12:51.471 14:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:31:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:51.471 14:31:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:51.471 14:31:59 -- accel/accel.sh@12 -- # build_accel_config 00:12:51.471 14:31:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:51.471 14:31:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:51.471 14:31:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:51.471 14:31:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:51.471 14:31:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:51.471 14:31:59 -- accel/accel.sh@40 -- # local IFS=, 00:12:51.471 14:31:59 -- accel/accel.sh@41 -- # jq -r . 00:12:51.471 [2024-04-17 14:31:59.806056] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
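The copy_crc32c test launched here uses -t 1 -w copy_crc32c -y, and its _C2 companion later in the log adds -C 2 to raise the io vector size. A sketch of both invocations, config omitted as in the earlier examples:
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y        # combined copy plus crc32c, verified
$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # same workload with an io vector size of 2, as in the _C2 test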
00:12:51.471 [2024-04-17 14:31:59.806139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61037 ] 00:12:51.471 [2024-04-17 14:31:59.942701] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.471 [2024-04-17 14:32:00.013355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val= 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val= 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val=0x1 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val= 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val= 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val=0 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val= 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val=software 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@22 -- # accel_module=software 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val=32 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val=32 
00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val=1 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val=Yes 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val= 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:51.471 14:32:00 -- accel/accel.sh@20 -- # val= 00:12:51.471 14:32:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # IFS=: 00:12:51.471 14:32:00 -- accel/accel.sh@19 -- # read -r var val 00:12:52.847 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:52.847 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:52.847 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:52.847 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:52.847 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:52.847 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:52.847 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:52.847 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:52.847 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:52.847 ************************************ 00:12:52.847 END TEST accel_copy_crc32c 00:12:52.847 ************************************ 00:12:52.847 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:52.847 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:52.847 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:52.847 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:52.847 14:32:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:52.847 14:32:01 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:52.847 14:32:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:52.847 00:12:52.847 real 0m1.411s 00:12:52.847 user 0m1.231s 00:12:52.847 sys 0m0.082s 00:12:52.847 14:32:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:52.847 14:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:52.847 14:32:01 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:52.847 14:32:01 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:12:52.847 14:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.847 14:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:52.847 ************************************ 00:12:52.847 START TEST accel_copy_crc32c_C2 00:12:52.847 ************************************ 00:12:52.847 14:32:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:52.847 14:32:01 -- accel/accel.sh@16 -- # local accel_opc 00:12:52.847 14:32:01 -- accel/accel.sh@17 -- # local accel_module 00:12:52.848 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:52.848 14:32:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:52.848 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:52.848 14:32:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:52.848 14:32:01 -- accel/accel.sh@12 -- # build_accel_config 00:12:52.848 14:32:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:52.848 14:32:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:52.848 14:32:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:52.848 14:32:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:52.848 14:32:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:52.848 14:32:01 -- accel/accel.sh@40 -- # local IFS=, 00:12:52.848 14:32:01 -- accel/accel.sh@41 -- # jq -r . 00:12:52.848 [2024-04-17 14:32:01.309656] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:52.848 [2024-04-17 14:32:01.309733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61076 ] 00:12:52.848 [2024-04-17 14:32:01.446564] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.106 [2024-04-17 14:32:01.515045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val=0x1 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val=0 00:12:53.106 14:32:01 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val=software 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@22 -- # accel_module=software 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val=32 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val=32 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val=1 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val=Yes 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:53.106 14:32:01 -- accel/accel.sh@20 -- # val= 00:12:53.106 14:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:53.106 14:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:02 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:02 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:02 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # read -r var val 
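The xtrace above captures the settings for the chained copy+CRC32C case (accel_copy_crc32c_C2): two source buffers of 4096 and 8192 bytes, the software accel module, and a 1-second run. A minimal way to replay just this case outside the harness is sketched below; the only assumptions beyond what the log shows are that the job's workspace layout (/home/vagrant/spdk_repo/spdk) is present and that the "-c /dev/fd/62" argument, which is just the harness feeding its (empty in this run) accel JSON config, can be omitted when invoking accel_perf by hand.

  # Sketch: rerun the chained copy+CRC32C case directly.
  # Assumes the job's workspace layout and an SPDK build with examples enabled;
  # the harness's "-c /dev/fd/62" accel config (empty here) is dropped.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2   # -y: verify results; -C 2: the chained two-buffer variant this test exercises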
00:12:54.484 14:32:02 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:02 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:02 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:54.484 14:32:02 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:54.484 14:32:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:54.484 00:12:54.484 real 0m1.404s 00:12:54.484 user 0m1.239s 00:12:54.484 sys 0m0.071s 00:12:54.484 14:32:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:54.484 14:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:54.484 ************************************ 00:12:54.484 END TEST accel_copy_crc32c_C2 00:12:54.484 ************************************ 00:12:54.484 14:32:02 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:54.484 14:32:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:54.484 14:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.484 14:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:54.484 ************************************ 00:12:54.484 START TEST accel_dualcast 00:12:54.484 ************************************ 00:12:54.484 14:32:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:12:54.484 14:32:02 -- accel/accel.sh@16 -- # local accel_opc 00:12:54.484 14:32:02 -- accel/accel.sh@17 -- # local accel_module 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:54.484 14:32:02 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:54.484 14:32:02 -- accel/accel.sh@12 -- # build_accel_config 00:12:54.484 14:32:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:54.484 14:32:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:54.484 14:32:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:54.484 14:32:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:54.484 14:32:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:54.484 14:32:02 -- accel/accel.sh@40 -- # local IFS=, 00:12:54.484 14:32:02 -- accel/accel.sh@41 -- # jq -r . 00:12:54.484 [2024-04-17 14:32:02.827272] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:12:54.484 [2024-04-17 14:32:02.827350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61109 ] 00:12:54.484 [2024-04-17 14:32:02.966270] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.484 [2024-04-17 14:32:03.035761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val=0x1 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val=dualcast 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val= 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val=software 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@22 -- # accel_module=software 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val=32 00:12:54.484 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.484 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.484 14:32:03 -- accel/accel.sh@20 -- # val=32 00:12:54.485 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.485 14:32:03 -- accel/accel.sh@20 -- # val=1 00:12:54.485 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.485 14:32:03 -- accel/accel.sh@20 -- # val='1 seconds' 
00:12:54.485 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.485 14:32:03 -- accel/accel.sh@20 -- # val=Yes 00:12:54.485 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.485 14:32:03 -- accel/accel.sh@20 -- # val= 00:12:54.485 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:54.485 14:32:03 -- accel/accel.sh@20 -- # val= 00:12:54.485 14:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:54.485 14:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:55.862 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:55.862 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:55.862 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:55.862 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:55.862 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:55.862 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:55.862 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:55.862 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:55.862 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:55.862 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:55.862 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:55.862 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:55.862 ************************************ 00:12:55.862 END TEST accel_dualcast 00:12:55.862 ************************************ 00:12:55.862 14:32:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:55.862 14:32:04 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:55.862 14:32:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:55.862 00:12:55.862 real 0m1.414s 00:12:55.862 user 0m1.235s 00:12:55.862 sys 0m0.079s 00:12:55.862 14:32:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:55.862 14:32:04 -- common/autotest_common.sh@10 -- # set +x 00:12:55.862 14:32:04 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:55.862 14:32:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:55.862 14:32:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.862 14:32:04 -- common/autotest_common.sh@10 -- # set +x 00:12:55.862 ************************************ 00:12:55.862 START TEST accel_compare 00:12:55.862 ************************************ 00:12:55.862 14:32:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:12:55.862 14:32:04 -- accel/accel.sh@16 -- # local accel_opc 00:12:55.862 14:32:04 -- accel/accel.sh@17 -- # local 
accel_module 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:55.862 14:32:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:55.862 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:55.862 14:32:04 -- accel/accel.sh@12 -- # build_accel_config 00:12:55.862 14:32:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:55.862 14:32:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:55.862 14:32:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:55.862 14:32:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:55.862 14:32:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:55.862 14:32:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:55.862 14:32:04 -- accel/accel.sh@40 -- # local IFS=, 00:12:55.862 14:32:04 -- accel/accel.sh@41 -- # jq -r . 00:12:55.862 [2024-04-17 14:32:04.350539] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:55.862 [2024-04-17 14:32:04.350618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61153 ] 00:12:56.120 [2024-04-17 14:32:04.490028] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.120 [2024-04-17 14:32:04.558970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val=0x1 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val=compare 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@23 -- # accel_opc=compare 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val=software 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 
00:12:56.120 14:32:04 -- accel/accel.sh@22 -- # accel_module=software 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val=32 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val=32 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val=1 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val=Yes 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:56.120 14:32:04 -- accel/accel.sh@20 -- # val= 00:12:56.120 14:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:56.120 14:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:57.497 14:32:05 -- accel/accel.sh@20 -- # val= 00:12:57.497 14:32:05 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # IFS=: 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # read -r var val 00:12:57.497 14:32:05 -- accel/accel.sh@20 -- # val= 00:12:57.497 14:32:05 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # IFS=: 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # read -r var val 00:12:57.497 14:32:05 -- accel/accel.sh@20 -- # val= 00:12:57.497 14:32:05 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # IFS=: 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # read -r var val 00:12:57.497 14:32:05 -- accel/accel.sh@20 -- # val= 00:12:57.497 14:32:05 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # IFS=: 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # read -r var val 00:12:57.497 14:32:05 -- accel/accel.sh@20 -- # val= 00:12:57.497 14:32:05 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # IFS=: 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # read -r var val 00:12:57.497 14:32:05 -- accel/accel.sh@20 -- # val= 00:12:57.497 ************************************ 00:12:57.497 END TEST accel_compare 00:12:57.497 ************************************ 00:12:57.497 14:32:05 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # IFS=: 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # read -r var val 00:12:57.497 14:32:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:57.497 14:32:05 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:57.497 14:32:05 -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:57.497 00:12:57.497 real 0m1.411s 00:12:57.497 user 0m1.236s 00:12:57.497 sys 0m0.078s 00:12:57.497 14:32:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:57.497 14:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:57.497 14:32:05 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:57.497 14:32:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:57.497 14:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.497 14:32:05 -- common/autotest_common.sh@10 -- # set +x 00:12:57.497 ************************************ 00:12:57.497 START TEST accel_xor 00:12:57.497 ************************************ 00:12:57.497 14:32:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:12:57.497 14:32:05 -- accel/accel.sh@16 -- # local accel_opc 00:12:57.497 14:32:05 -- accel/accel.sh@17 -- # local accel_module 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # IFS=: 00:12:57.497 14:32:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:57.497 14:32:05 -- accel/accel.sh@19 -- # read -r var val 00:12:57.497 14:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:57.497 14:32:05 -- accel/accel.sh@12 -- # build_accel_config 00:12:57.497 14:32:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:57.497 14:32:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:57.497 14:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:57.497 14:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:57.497 14:32:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:57.497 14:32:05 -- accel/accel.sh@40 -- # local IFS=, 00:12:57.497 14:32:05 -- accel/accel.sh@41 -- # jq -r . 00:12:57.497 [2024-04-17 14:32:05.873343] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
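The dualcast and compare runs above follow the same pattern as the copy_crc32c cases: a 1-second accel_perf run over 4096-byte buffers on the software module, with -y enabling result verification. A compact replay of those two workloads, under the same layout assumption as the earlier sketch (the SPDK path is taken from this log; omitting the harness's /dev/fd/62 config is an assumption):

  # Sketch: replay the dualcast and compare runs seen above (1-second,
  # software accel module, result verification on).
  SPDK=/home/vagrant/spdk_repo/spdk
  for w in dualcast compare; do
      "$SPDK/build/examples/accel_perf" -t 1 -w "$w" -y
  done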
00:12:57.497 [2024-04-17 14:32:05.873481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61187 ] 00:12:57.497 [2024-04-17 14:32:06.016134] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.497 [2024-04-17 14:32:06.076499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val= 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val= 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val=0x1 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val= 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val= 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val=xor 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val=2 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val= 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val=software 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@22 -- # accel_module=software 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val=32 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val=32 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val=1 00:12:57.801 14:32:06 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val=Yes 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val= 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:57.801 14:32:06 -- accel/accel.sh@20 -- # val= 00:12:57.801 14:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:57.801 14:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:58.753 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:58.753 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:58.753 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:58.753 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:58.753 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:58.753 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:58.753 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:58.753 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:58.753 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:58.753 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:58.753 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:58.753 ************************************ 00:12:58.753 END TEST accel_xor 00:12:58.753 ************************************ 00:12:58.753 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:58.753 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:58.753 14:32:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:58.753 14:32:07 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:58.753 14:32:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:58.753 00:12:58.753 real 0m1.402s 00:12:58.753 user 0m1.232s 00:12:58.753 sys 0m0.075s 00:12:58.753 14:32:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:58.753 14:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:58.753 14:32:07 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:58.753 14:32:07 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:58.753 14:32:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.753 14:32:07 -- common/autotest_common.sh@10 -- # set +x 00:12:59.012 ************************************ 00:12:59.012 START TEST accel_xor 00:12:59.012 ************************************ 00:12:59.012 
14:32:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:12:59.012 14:32:07 -- accel/accel.sh@16 -- # local accel_opc 00:12:59.012 14:32:07 -- accel/accel.sh@17 -- # local accel_module 00:12:59.012 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.012 14:32:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:59.012 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.012 14:32:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:59.012 14:32:07 -- accel/accel.sh@12 -- # build_accel_config 00:12:59.012 14:32:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:59.012 14:32:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:59.012 14:32:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:59.012 14:32:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:59.012 14:32:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:59.012 14:32:07 -- accel/accel.sh@40 -- # local IFS=, 00:12:59.012 14:32:07 -- accel/accel.sh@41 -- # jq -r . 00:12:59.012 [2024-04-17 14:32:07.385407] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:12:59.012 [2024-04-17 14:32:07.385505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61231 ] 00:12:59.012 [2024-04-17 14:32:07.521825] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.012 [2024-04-17 14:32:07.582175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val=0x1 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val=xor 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val=3 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 
00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val=software 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@22 -- # accel_module=software 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val=32 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val=32 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val=1 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val=Yes 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:12:59.271 14:32:07 -- accel/accel.sh@20 -- # val= 00:12:59.271 14:32:07 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # IFS=: 00:12:59.271 14:32:07 -- accel/accel.sh@19 -- # read -r var val 00:13:00.208 14:32:08 -- accel/accel.sh@20 -- # val= 00:13:00.209 14:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # IFS=: 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # read -r var val 00:13:00.209 14:32:08 -- accel/accel.sh@20 -- # val= 00:13:00.209 14:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # IFS=: 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # read -r var val 00:13:00.209 14:32:08 -- accel/accel.sh@20 -- # val= 00:13:00.209 14:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # IFS=: 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # read -r var val 00:13:00.209 14:32:08 -- accel/accel.sh@20 -- # val= 00:13:00.209 14:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # IFS=: 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # read -r var val 00:13:00.209 14:32:08 -- accel/accel.sh@20 -- # val= 00:13:00.209 14:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # IFS=: 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # read -r var val 00:13:00.209 14:32:08 -- accel/accel.sh@20 -- # val= 00:13:00.209 14:32:08 -- accel/accel.sh@21 -- # case "$var" in 
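The second xor case differs from the first only in the source count: the first run's xtrace shows val=2 (two source buffers), while this one is invoked with -x 3 and shows val=3. Side by side, the two invocations as they appear in this log, with the same standalone-replay assumptions as above:

  # Sketch: the two xor runs from this log, replayed directly.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w xor -y         # two source buffers, as in the first xor run
  "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3    # three source buffers, as in this run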
00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # IFS=: 00:13:00.209 14:32:08 -- accel/accel.sh@19 -- # read -r var val 00:13:00.209 14:32:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:00.209 14:32:08 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:00.209 14:32:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:00.209 00:13:00.209 real 0m1.391s 00:13:00.209 user 0m1.225s 00:13:00.209 sys 0m0.071s 00:13:00.209 14:32:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:00.209 ************************************ 00:13:00.209 END TEST accel_xor 00:13:00.209 ************************************ 00:13:00.209 14:32:08 -- common/autotest_common.sh@10 -- # set +x 00:13:00.209 14:32:08 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:13:00.209 14:32:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:13:00.209 14:32:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.209 14:32:08 -- common/autotest_common.sh@10 -- # set +x 00:13:00.468 ************************************ 00:13:00.468 START TEST accel_dif_verify 00:13:00.468 ************************************ 00:13:00.468 14:32:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:13:00.468 14:32:08 -- accel/accel.sh@16 -- # local accel_opc 00:13:00.468 14:32:08 -- accel/accel.sh@17 -- # local accel_module 00:13:00.468 14:32:08 -- accel/accel.sh@19 -- # IFS=: 00:13:00.468 14:32:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:13:00.468 14:32:08 -- accel/accel.sh@19 -- # read -r var val 00:13:00.468 14:32:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:13:00.468 14:32:08 -- accel/accel.sh@12 -- # build_accel_config 00:13:00.468 14:32:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:00.468 14:32:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:00.468 14:32:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:00.468 14:32:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:00.468 14:32:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:00.468 14:32:08 -- accel/accel.sh@40 -- # local IFS=, 00:13:00.468 14:32:08 -- accel/accel.sh@41 -- # jq -r . 00:13:00.468 [2024-04-17 14:32:08.888403] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
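The dif_verify case starting here drops -y: run_test passes only "-t 1 -w dif_verify", and the val= lines further down show 4096-byte buffers with 512-byte and 8-byte values, which appear to be the block size and per-block DIF metadata size. To replay it standalone under the same assumptions as the sketches above:

  # Sketch: rerun the DIF verify workload directly (no -y, matching the harness invocation).
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_verify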
00:13:00.468 [2024-04-17 14:32:08.888523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61264 ] 00:13:00.468 [2024-04-17 14:32:09.026543] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.727 [2024-04-17 14:32:09.094568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val= 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val= 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val=0x1 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val= 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val= 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val=dif_verify 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val='512 bytes' 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val='8 bytes' 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val= 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val=software 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@22 -- # accel_module=software 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 
-- # val=32 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val=32 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val=1 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.727 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.727 14:32:09 -- accel/accel.sh@20 -- # val=No 00:13:00.727 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.728 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.728 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.728 14:32:09 -- accel/accel.sh@20 -- # val= 00:13:00.728 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.728 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.728 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:00.728 14:32:09 -- accel/accel.sh@20 -- # val= 00:13:00.728 14:32:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.728 14:32:09 -- accel/accel.sh@19 -- # IFS=: 00:13:00.728 14:32:09 -- accel/accel.sh@19 -- # read -r var val 00:13:02.103 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.103 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.103 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.103 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.103 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.103 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.103 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.103 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.103 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.103 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.103 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.103 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.103 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.103 14:32:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:02.103 ************************************ 00:13:02.103 END TEST accel_dif_verify 00:13:02.103 ************************************ 00:13:02.103 14:32:10 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:13:02.103 14:32:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:02.103 00:13:02.103 real 0m1.419s 00:13:02.103 user 0m1.237s 00:13:02.103 sys 0m0.086s 00:13:02.103 14:32:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.103 
14:32:10 -- common/autotest_common.sh@10 -- # set +x 00:13:02.103 14:32:10 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:13:02.103 14:32:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:13:02.103 14:32:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.103 14:32:10 -- common/autotest_common.sh@10 -- # set +x 00:13:02.103 ************************************ 00:13:02.103 START TEST accel_dif_generate 00:13:02.103 ************************************ 00:13:02.103 14:32:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:13:02.103 14:32:10 -- accel/accel.sh@16 -- # local accel_opc 00:13:02.103 14:32:10 -- accel/accel.sh@17 -- # local accel_module 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:13:02.104 14:32:10 -- accel/accel.sh@12 -- # build_accel_config 00:13:02.104 14:32:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:02.104 14:32:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:02.104 14:32:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:02.104 14:32:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:02.104 14:32:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:02.104 14:32:10 -- accel/accel.sh@40 -- # local IFS=, 00:13:02.104 14:32:10 -- accel/accel.sh@41 -- # jq -r . 00:13:02.104 [2024-04-17 14:32:10.411711] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:02.104 [2024-04-17 14:32:10.411850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61308 ] 00:13:02.104 [2024-04-17 14:32:10.550169] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.104 [2024-04-17 14:32:10.614977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val=0x1 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val=dif_generate 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val='512 bytes' 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val='8 bytes' 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val=software 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@22 -- # accel_module=software 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val=32 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val=32 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val=1 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val=No 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:02.104 14:32:10 -- accel/accel.sh@20 -- # val= 00:13:02.104 14:32:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # IFS=: 00:13:02.104 14:32:10 -- accel/accel.sh@19 -- # read -r var val 00:13:03.534 14:32:11 -- accel/accel.sh@20 -- # val= 00:13:03.534 14:32:11 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # IFS=: 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # read -r var 
val 00:13:03.534 14:32:11 -- accel/accel.sh@20 -- # val= 00:13:03.534 14:32:11 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # IFS=: 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # read -r var val 00:13:03.534 14:32:11 -- accel/accel.sh@20 -- # val= 00:13:03.534 14:32:11 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # IFS=: 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # read -r var val 00:13:03.534 14:32:11 -- accel/accel.sh@20 -- # val= 00:13:03.534 14:32:11 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # IFS=: 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # read -r var val 00:13:03.534 14:32:11 -- accel/accel.sh@20 -- # val= 00:13:03.534 14:32:11 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # IFS=: 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # read -r var val 00:13:03.534 14:32:11 -- accel/accel.sh@20 -- # val= 00:13:03.534 14:32:11 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # IFS=: 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # read -r var val 00:13:03.534 14:32:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:03.534 14:32:11 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:13:03.534 14:32:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:03.534 00:13:03.534 real 0m1.413s 00:13:03.534 user 0m1.232s 00:13:03.534 sys 0m0.086s 00:13:03.534 14:32:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.534 ************************************ 00:13:03.534 END TEST accel_dif_generate 00:13:03.534 ************************************ 00:13:03.534 14:32:11 -- common/autotest_common.sh@10 -- # set +x 00:13:03.534 14:32:11 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:13:03.534 14:32:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:13:03.534 14:32:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.534 14:32:11 -- common/autotest_common.sh@10 -- # set +x 00:13:03.534 ************************************ 00:13:03.534 START TEST accel_dif_generate_copy 00:13:03.534 ************************************ 00:13:03.534 14:32:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:13:03.534 14:32:11 -- accel/accel.sh@16 -- # local accel_opc 00:13:03.534 14:32:11 -- accel/accel.sh@17 -- # local accel_module 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # IFS=: 00:13:03.534 14:32:11 -- accel/accel.sh@19 -- # read -r var val 00:13:03.534 14:32:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:13:03.534 14:32:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:13:03.534 14:32:11 -- accel/accel.sh@12 -- # build_accel_config 00:13:03.534 14:32:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:03.534 14:32:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:03.534 14:32:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:03.534 14:32:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:03.534 14:32:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:03.534 14:32:11 -- accel/accel.sh@40 -- # local IFS=, 00:13:03.534 14:32:11 -- accel/accel.sh@41 -- # jq -r . 00:13:03.534 [2024-04-17 14:32:11.915621] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
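The dif_generate run above and the dif_generate_copy run beginning here round out the DIF workloads; both are plain 1-second software-module runs with no flags beyond the workload name. A combined replay, under the same layout and /dev/fd/62 assumptions as the earlier sketches:

  # Sketch: replay the DIF generate workloads from this log.
  SPDK=/home/vagrant/spdk_repo/spdk
  for w in dif_generate dif_generate_copy; do
      "$SPDK/build/examples/accel_perf" -t 1 -w "$w"
  done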
00:13:03.534 [2024-04-17 14:32:11.915708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61341 ] 00:13:03.534 [2024-04-17 14:32:12.052358] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.534 [2024-04-17 14:32:12.119114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val= 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val= 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val=0x1 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val= 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val= 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val= 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val=software 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@22 -- # accel_module=software 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val=32 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val=32 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 
-- # val=1 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val=No 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val= 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:03.793 14:32:12 -- accel/accel.sh@20 -- # val= 00:13:03.793 14:32:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # IFS=: 00:13:03.793 14:32:12 -- accel/accel.sh@19 -- # read -r var val 00:13:04.727 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:04.727 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:04.727 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:04.727 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:04.727 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:04.727 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:04.727 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:04.727 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:04.727 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:04.727 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:04.727 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:04.727 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:04.727 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:04.727 14:32:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:04.727 14:32:13 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:13:04.727 14:32:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:04.727 00:13:04.727 real 0m1.402s 00:13:04.727 user 0m1.227s 00:13:04.727 sys 0m0.076s 00:13:04.727 14:32:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:04.727 ************************************ 00:13:04.727 END TEST accel_dif_generate_copy 00:13:04.727 14:32:13 -- common/autotest_common.sh@10 -- # set +x 00:13:04.727 ************************************ 00:13:04.727 14:32:13 -- accel/accel.sh@115 -- # [[ y == y ]] 00:13:04.727 14:32:13 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:04.727 14:32:13 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:13:04.727 14:32:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.727 14:32:13 -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.986 ************************************ 00:13:04.986 START TEST accel_comp 00:13:04.986 ************************************ 00:13:04.986 14:32:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:04.986 14:32:13 -- accel/accel.sh@16 -- # local accel_opc 00:13:04.986 14:32:13 -- accel/accel.sh@17 -- # local accel_module 00:13:04.986 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:04.986 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:04.986 14:32:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:04.986 14:32:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:04.986 14:32:13 -- accel/accel.sh@12 -- # build_accel_config 00:13:04.986 14:32:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:04.986 14:32:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:04.986 14:32:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:04.986 14:32:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:04.986 14:32:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:04.986 14:32:13 -- accel/accel.sh@40 -- # local IFS=, 00:13:04.986 14:32:13 -- accel/accel.sh@41 -- # jq -r . 00:13:04.986 [2024-04-17 14:32:13.427360] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:04.986 [2024-04-17 14:32:13.427474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61384 ] 00:13:04.986 [2024-04-17 14:32:13.560140] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.245 [2024-04-17 14:32:13.618330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val=0x1 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val=compress 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@23 
-- # accel_opc=compress 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val=software 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@22 -- # accel_module=software 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val=32 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val=32 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val=1 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val=No 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:05.245 14:32:13 -- accel/accel.sh@20 -- # val= 00:13:05.245 14:32:13 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # IFS=: 00:13:05.245 14:32:13 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:14 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:14 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:14 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # 
read -r var val 00:13:06.622 14:32:14 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:14 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:14 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:06.622 14:32:14 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:06.622 14:32:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:06.622 00:13:06.622 real 0m1.401s 00:13:06.622 user 0m1.222s 00:13:06.622 sys 0m0.080s 00:13:06.622 14:32:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:06.622 14:32:14 -- common/autotest_common.sh@10 -- # set +x 00:13:06.622 ************************************ 00:13:06.622 END TEST accel_comp 00:13:06.622 ************************************ 00:13:06.622 14:32:14 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:06.622 14:32:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:13:06.622 14:32:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.622 14:32:14 -- common/autotest_common.sh@10 -- # set +x 00:13:06.622 ************************************ 00:13:06.622 START TEST accel_decomp 00:13:06.622 ************************************ 00:13:06.622 14:32:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:06.622 14:32:14 -- accel/accel.sh@16 -- # local accel_opc 00:13:06.622 14:32:14 -- accel/accel.sh@17 -- # local accel_module 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:06.622 14:32:14 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:06.622 14:32:14 -- accel/accel.sh@12 -- # build_accel_config 00:13:06.622 14:32:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:06.622 14:32:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:06.622 14:32:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:06.622 14:32:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:06.622 14:32:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:06.622 14:32:14 -- accel/accel.sh@40 -- # local IFS=, 00:13:06.622 14:32:14 -- accel/accel.sh@41 -- # jq -r . 00:13:06.622 [2024-04-17 14:32:14.929390] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
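The compress case (accel_comp) above finished in roughly 1.40s, and the trace now shows the matching decompress case starting up. Both variants hand accel_perf an input file through -l, here the bundled test/accel/bib file; a hand-run equivalent under the same path assumptions as before would be:

  # hypothetical stand-alone compress/decompress runs against the bundled bib file
  ./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
  ./build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y

The extra -y on the decompress side is taken from the traced run_test line and requests verification of each operation's result.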
00:13:06.622 [2024-04-17 14:32:14.929511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61420 ] 00:13:06.622 [2024-04-17 14:32:15.073900] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.622 [2024-04-17 14:32:15.141759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val=0x1 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val=decompress 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val=software 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@22 -- # accel_module=software 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val=32 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- 
accel/accel.sh@20 -- # val=32 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val=1 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val=Yes 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:06.622 14:32:15 -- accel/accel.sh@20 -- # val= 00:13:06.622 14:32:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # IFS=: 00:13:06.622 14:32:15 -- accel/accel.sh@19 -- # read -r var val 00:13:07.995 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:07.995 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:07.995 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:07.995 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:07.995 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:07.995 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:07.995 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:07.995 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:07.995 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:07.995 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:07.995 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:07.995 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:07.995 14:32:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:07.995 14:32:16 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:07.995 14:32:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:07.995 00:13:07.995 real 0m1.420s 00:13:07.995 user 0m1.236s 00:13:07.995 sys 0m0.084s 00:13:07.995 14:32:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:07.995 14:32:16 -- common/autotest_common.sh@10 -- # set +x 00:13:07.995 ************************************ 00:13:07.995 END TEST accel_decomp 00:13:07.995 ************************************ 00:13:07.995 14:32:16 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:13:07.995 14:32:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:07.995 14:32:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.995 14:32:16 -- common/autotest_common.sh@10 -- # set +x 00:13:07.995 ************************************ 00:13:07.995 START TEST accel_decmop_full 00:13:07.995 ************************************ 00:13:07.995 14:32:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:07.995 14:32:16 -- accel/accel.sh@16 -- # local accel_opc 00:13:07.995 14:32:16 -- accel/accel.sh@17 -- # local accel_module 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:07.995 14:32:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:07.995 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:07.996 14:32:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:07.996 14:32:16 -- accel/accel.sh@12 -- # build_accel_config 00:13:07.996 14:32:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:07.996 14:32:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:07.996 14:32:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:07.996 14:32:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:07.996 14:32:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:07.996 14:32:16 -- accel/accel.sh@40 -- # local IFS=, 00:13:07.996 14:32:16 -- accel/accel.sh@41 -- # jq -r . 00:13:07.996 [2024-04-17 14:32:16.444785] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:07.996 [2024-04-17 14:32:16.444872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61459 ] 00:13:07.996 [2024-04-17 14:32:16.576209] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.279 [2024-04-17 14:32:16.635153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val=0x1 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 
14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val=decompress 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val=software 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@22 -- # accel_module=software 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val=32 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val=32 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val=1 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.279 14:32:16 -- accel/accel.sh@20 -- # val=Yes 00:13:08.279 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.279 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.280 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.280 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:08.280 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.280 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.280 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:08.280 14:32:16 -- accel/accel.sh@20 -- # val= 00:13:08.280 14:32:16 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.280 14:32:16 -- accel/accel.sh@19 -- # IFS=: 00:13:08.280 14:32:16 -- accel/accel.sh@19 -- # read -r var val 00:13:09.242 14:32:17 -- accel/accel.sh@20 -- # val= 00:13:09.242 14:32:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # IFS=: 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # read -r var val 00:13:09.242 14:32:17 -- accel/accel.sh@20 -- # val= 00:13:09.242 14:32:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # IFS=: 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # read -r 
var val 00:13:09.242 14:32:17 -- accel/accel.sh@20 -- # val= 00:13:09.242 14:32:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # IFS=: 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # read -r var val 00:13:09.242 14:32:17 -- accel/accel.sh@20 -- # val= 00:13:09.242 14:32:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # IFS=: 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # read -r var val 00:13:09.242 14:32:17 -- accel/accel.sh@20 -- # val= 00:13:09.242 14:32:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # IFS=: 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # read -r var val 00:13:09.242 14:32:17 -- accel/accel.sh@20 -- # val= 00:13:09.242 14:32:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # IFS=: 00:13:09.242 14:32:17 -- accel/accel.sh@19 -- # read -r var val 00:13:09.242 14:32:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:09.242 14:32:17 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:09.242 14:32:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:09.242 00:13:09.242 real 0m1.395s 00:13:09.242 user 0m1.232s 00:13:09.242 sys 0m0.067s 00:13:09.242 14:32:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:09.242 ************************************ 00:13:09.242 END TEST accel_decmop_full 00:13:09.242 14:32:17 -- common/autotest_common.sh@10 -- # set +x 00:13:09.242 ************************************ 00:13:09.501 14:32:17 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:09.501 14:32:17 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:09.501 14:32:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:09.501 14:32:17 -- common/autotest_common.sh@10 -- # set +x 00:13:09.501 ************************************ 00:13:09.501 START TEST accel_decomp_mcore 00:13:09.501 ************************************ 00:13:09.501 14:32:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:09.501 14:32:17 -- accel/accel.sh@16 -- # local accel_opc 00:13:09.501 14:32:17 -- accel/accel.sh@17 -- # local accel_module 00:13:09.501 14:32:17 -- accel/accel.sh@19 -- # IFS=: 00:13:09.501 14:32:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:09.501 14:32:17 -- accel/accel.sh@19 -- # read -r var val 00:13:09.501 14:32:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:09.501 14:32:17 -- accel/accel.sh@12 -- # build_accel_config 00:13:09.501 14:32:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:09.501 14:32:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:09.501 14:32:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.501 14:32:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.501 14:32:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:09.501 14:32:17 -- accel/accel.sh@40 -- # local IFS=, 00:13:09.501 14:32:17 -- accel/accel.sh@41 -- # jq -r . 00:13:09.501 [2024-04-17 14:32:17.952278] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
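accel_decmop_full above (the spelling is the test name used by accel.sh) is the same decompress workload with -o 0 appended; in its trace the data size read back changes from the usual '4096 bytes' to '111250 bytes', which matches decompressing the whole bib file per operation instead of 4 KiB blocks. The equivalent stand-alone invocation, under the same path assumptions:

  # full-file decompress per operation rather than 4 KiB chunks (-o 0)
  ./build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0

The mcore run now starting switches to a four-core mask instead, visible just below as four reactors coming up.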
00:13:09.501 [2024-04-17 14:32:17.952414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:13:09.501 [2024-04-17 14:32:18.092422] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.760 [2024-04-17 14:32:18.181563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.760 [2024-04-17 14:32:18.181650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.760 [2024-04-17 14:32:18.181715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.760 [2024-04-17 14:32:18.181722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val= 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val= 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val= 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val=0xf 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val= 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val= 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val=decompress 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val= 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val=software 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@22 -- # accel_module=software 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 
00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val=32 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val=32 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val=1 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val=Yes 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val= 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:09.760 14:32:18 -- accel/accel.sh@20 -- # val= 00:13:09.760 14:32:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # IFS=: 00:13:09.760 14:32:18 -- accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.135 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.135 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.135 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.135 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.135 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.135 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.135 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.135 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.135 14:32:19 -- 
accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.135 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.135 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.135 14:32:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:11.135 14:32:19 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:11.135 14:32:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:11.135 00:13:11.135 real 0m1.453s 00:13:11.135 user 0m4.500s 00:13:11.135 sys 0m0.104s 00:13:11.135 14:32:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:11.135 14:32:19 -- common/autotest_common.sh@10 -- # set +x 00:13:11.135 ************************************ 00:13:11.135 END TEST accel_decomp_mcore 00:13:11.135 ************************************ 00:13:11.135 14:32:19 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:11.135 14:32:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:11.135 14:32:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.135 14:32:19 -- common/autotest_common.sh@10 -- # set +x 00:13:11.135 ************************************ 00:13:11.135 START TEST accel_decomp_full_mcore 00:13:11.135 ************************************ 00:13:11.136 14:32:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:11.136 14:32:19 -- accel/accel.sh@16 -- # local accel_opc 00:13:11.136 14:32:19 -- accel/accel.sh@17 -- # local accel_module 00:13:11.136 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.136 14:32:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:11.136 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.136 14:32:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:11.136 14:32:19 -- accel/accel.sh@12 -- # build_accel_config 00:13:11.136 14:32:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:11.136 14:32:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:11.136 14:32:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:11.136 14:32:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:11.136 14:32:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:11.136 14:32:19 -- accel/accel.sh@40 -- # local IFS=, 00:13:11.136 14:32:19 -- accel/accel.sh@41 -- # jq -r . 00:13:11.136 [2024-04-17 14:32:19.495387] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
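The accel_decomp_mcore case that just completed adds -m 0xf, so the app is started with a four-core mask ('Total cores available: 4', reactors on cores 0 through 3) and its user time, about 4.5s against roughly 1.45s of wall time, is consistent with four reactors decompressing in parallel. A stand-alone equivalent under the same assumptions:

  # multi-core decompress; -m supplies the SPDK core mask (0xf selects four cores)
  ./build/examples/accel_perf -m 0xf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y

The full_mcore variant now starting combines this mask with -o 0, i.e. whole-file operations on all four cores.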
00:13:11.136 [2024-04-17 14:32:19.495482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61539 ] 00:13:11.136 [2024-04-17 14:32:19.625856] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.136 [2024-04-17 14:32:19.687818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.136 [2024-04-17 14:32:19.687906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.136 [2024-04-17 14:32:19.688762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.136 [2024-04-17 14:32:19.688784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val=0xf 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val=decompress 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val=software 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@22 -- # accel_module=software 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.394 14:32:19 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:11.394 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.394 14:32:19 -- accel/accel.sh@19 -- # IFS=: 
00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.395 14:32:19 -- accel/accel.sh@20 -- # val=32 00:13:11.395 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.395 14:32:19 -- accel/accel.sh@20 -- # val=32 00:13:11.395 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.395 14:32:19 -- accel/accel.sh@20 -- # val=1 00:13:11.395 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.395 14:32:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:11.395 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.395 14:32:19 -- accel/accel.sh@20 -- # val=Yes 00:13:11.395 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.395 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.395 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:11.395 14:32:19 -- accel/accel.sh@20 -- # val= 00:13:11.395 14:32:19 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # IFS=: 00:13:11.395 14:32:19 -- accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@20 -- # val= 00:13:12.328 14:32:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # IFS=: 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@20 -- # val= 00:13:12.328 14:32:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # IFS=: 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@20 -- # val= 00:13:12.328 14:32:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # IFS=: 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@20 -- # val= 00:13:12.328 14:32:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # IFS=: 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@20 -- # val= 00:13:12.328 14:32:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # IFS=: 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@20 -- # val= 00:13:12.328 14:32:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # IFS=: 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@20 -- # val= 00:13:12.328 14:32:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # IFS=: 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@20 -- # val= 00:13:12.328 14:32:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # IFS=: 00:13:12.328 14:32:20 -- 
accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@20 -- # val= 00:13:12.328 14:32:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # IFS=: 00:13:12.328 14:32:20 -- accel/accel.sh@19 -- # read -r var val 00:13:12.328 14:32:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:12.328 14:32:20 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:12.328 14:32:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:12.328 00:13:12.328 real 0m1.445s 00:13:12.328 user 0m4.601s 00:13:12.328 sys 0m0.090s 00:13:12.328 14:32:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:12.328 14:32:20 -- common/autotest_common.sh@10 -- # set +x 00:13:12.328 ************************************ 00:13:12.328 END TEST accel_decomp_full_mcore 00:13:12.328 ************************************ 00:13:12.587 14:32:20 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:12.587 14:32:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:12.587 14:32:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:12.587 14:32:20 -- common/autotest_common.sh@10 -- # set +x 00:13:12.587 ************************************ 00:13:12.587 START TEST accel_decomp_mthread 00:13:12.587 ************************************ 00:13:12.587 14:32:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:12.587 14:32:21 -- accel/accel.sh@16 -- # local accel_opc 00:13:12.587 14:32:21 -- accel/accel.sh@17 -- # local accel_module 00:13:12.587 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.587 14:32:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:12.587 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.587 14:32:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:12.587 14:32:21 -- accel/accel.sh@12 -- # build_accel_config 00:13:12.587 14:32:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:12.587 14:32:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:12.587 14:32:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:12.587 14:32:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:12.587 14:32:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:12.587 14:32:21 -- accel/accel.sh@40 -- # local IFS=, 00:13:12.587 14:32:21 -- accel/accel.sh@41 -- # jq -r . 00:13:12.587 [2024-04-17 14:32:21.056543] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
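accel_decomp_full_mcore above paired -o 0 with -m 0xf (whole-file operations on four cores, about 1.45s real / 4.6s user). The run now starting, accel_decomp_mthread, stays on a single core but passes -T 2; the trace reads back val=2 where the earlier single-threaded runs read val=1, i.e. two worker threads driving the software module on that core. By analogy with the traced command lines, a manual run would be:

  # single core, two worker threads (-T 2), default 4 KiB decompress operations
  ./build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2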
00:13:12.587 [2024-04-17 14:32:21.056674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61581 ] 00:13:12.845 [2024-04-17 14:32:21.195070] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.845 [2024-04-17 14:32:21.278597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val= 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val= 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val= 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val=0x1 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val= 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val= 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val=decompress 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val= 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val=software 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@22 -- # accel_module=software 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val=32 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- 
accel/accel.sh@20 -- # val=32 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val=2 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val=Yes 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val= 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:12.845 14:32:21 -- accel/accel.sh@20 -- # val= 00:13:12.845 14:32:21 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # IFS=: 00:13:12.845 14:32:21 -- accel/accel.sh@19 -- # read -r var val 00:13:14.231 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.231 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.231 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.231 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.231 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.231 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.231 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.231 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.231 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.231 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.231 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.231 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.231 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.231 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.231 14:32:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:14.231 14:32:22 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:14.231 14:32:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:14.231 00:13:14.231 real 0m1.437s 00:13:14.231 user 0m1.244s 00:13:14.231 sys 0m0.095s 00:13:14.231 14:32:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:14.231 14:32:22 -- common/autotest_common.sh@10 -- # set +x 00:13:14.231 ************************************ 00:13:14.231 END 
TEST accel_decomp_mthread 00:13:14.231 ************************************ 00:13:14.231 14:32:22 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:14.231 14:32:22 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:14.231 14:32:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.231 14:32:22 -- common/autotest_common.sh@10 -- # set +x 00:13:14.231 ************************************ 00:13:14.231 START TEST accel_deomp_full_mthread 00:13:14.231 ************************************ 00:13:14.231 14:32:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:14.231 14:32:22 -- accel/accel.sh@16 -- # local accel_opc 00:13:14.231 14:32:22 -- accel/accel.sh@17 -- # local accel_module 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.231 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.231 14:32:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:14.231 14:32:22 -- accel/accel.sh@12 -- # build_accel_config 00:13:14.231 14:32:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:14.231 14:32:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:14.231 14:32:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:14.231 14:32:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:14.231 14:32:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:14.231 14:32:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:14.231 14:32:22 -- accel/accel.sh@40 -- # local IFS=, 00:13:14.231 14:32:22 -- accel/accel.sh@41 -- # jq -r . 00:13:14.231 [2024-04-17 14:32:22.601599] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:14.231 [2024-04-17 14:32:22.601700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61620 ] 00:13:14.231 [2024-04-17 14:32:22.736005] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.231 [2024-04-17 14:32:22.819578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.490 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.490 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.490 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.490 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.490 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val=0x1 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val=decompress 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val=software 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@22 -- # accel_module=software 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val=32 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- 
accel/accel.sh@20 -- # val=32 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val=2 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val=Yes 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:14.491 14:32:22 -- accel/accel.sh@20 -- # val= 00:13:14.491 14:32:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # IFS=: 00:13:14.491 14:32:22 -- accel/accel.sh@19 -- # read -r var val 00:13:15.866 14:32:24 -- accel/accel.sh@20 -- # val= 00:13:15.866 14:32:24 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.866 14:32:24 -- accel/accel.sh@19 -- # IFS=: 00:13:15.866 14:32:24 -- accel/accel.sh@19 -- # read -r var val 00:13:15.866 14:32:24 -- accel/accel.sh@20 -- # val= 00:13:15.866 14:32:24 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.866 14:32:24 -- accel/accel.sh@19 -- # IFS=: 00:13:15.866 14:32:24 -- accel/accel.sh@19 -- # read -r var val 00:13:15.866 14:32:24 -- accel/accel.sh@20 -- # val= 00:13:15.866 14:32:24 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.866 14:32:24 -- accel/accel.sh@19 -- # IFS=: 00:13:15.866 14:32:24 -- accel/accel.sh@19 -- # read -r var val 00:13:15.866 14:32:24 -- accel/accel.sh@20 -- # val= 00:13:15.866 14:32:24 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.867 14:32:24 -- accel/accel.sh@19 -- # IFS=: 00:13:15.867 14:32:24 -- accel/accel.sh@19 -- # read -r var val 00:13:15.867 14:32:24 -- accel/accel.sh@20 -- # val= 00:13:15.867 14:32:24 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.867 14:32:24 -- accel/accel.sh@19 -- # IFS=: 00:13:15.867 14:32:24 -- accel/accel.sh@19 -- # read -r var val 00:13:15.867 14:32:24 -- accel/accel.sh@20 -- # val= 00:13:15.867 14:32:24 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.867 14:32:24 -- accel/accel.sh@19 -- # IFS=: 00:13:15.867 14:32:24 -- accel/accel.sh@19 -- # read -r var val 00:13:15.867 14:32:24 -- accel/accel.sh@20 -- # val= 00:13:15.867 14:32:24 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.867 14:32:24 -- accel/accel.sh@19 -- # IFS=: 00:13:15.867 14:32:24 -- accel/accel.sh@19 -- # read -r var val 00:13:15.867 14:32:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:15.867 14:32:24 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:15.867 14:32:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:15.867 00:13:15.867 real 0m1.471s 00:13:15.867 user 0m1.287s 00:13:15.867 sys 0m0.081s 00:13:15.867 14:32:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:15.867 14:32:24 -- common/autotest_common.sh@10 -- # set +x 00:13:15.867 ************************************ 00:13:15.867 END 
TEST accel_deomp_full_mthread 00:13:15.867 ************************************ 00:13:15.867 14:32:24 -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:15.867 14:32:24 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:15.867 14:32:24 -- accel/accel.sh@137 -- # build_accel_config 00:13:15.867 14:32:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:15.867 14:32:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:15.867 14:32:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:15.867 14:32:24 -- common/autotest_common.sh@10 -- # set +x 00:13:15.867 14:32:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:15.867 14:32:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:15.867 14:32:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:15.867 14:32:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:15.867 14:32:24 -- accel/accel.sh@40 -- # local IFS=, 00:13:15.867 14:32:24 -- accel/accel.sh@41 -- # jq -r . 00:13:15.867 ************************************ 00:13:15.867 START TEST accel_dif_functional_tests 00:13:15.867 ************************************ 00:13:15.867 14:32:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:15.867 [2024-04-17 14:32:24.204808] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:15.867 [2024-04-17 14:32:24.204918] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61659 ] 00:13:15.867 [2024-04-17 14:32:24.341401] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.867 [2024-04-17 14:32:24.401550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.867 [2024-04-17 14:32:24.401693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.867 [2024-04-17 14:32:24.401697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.867 00:13:15.867 00:13:15.867 CUnit - A unit testing framework for C - Version 2.1-3 00:13:15.867 http://cunit.sourceforge.net/ 00:13:15.867 00:13:15.867 00:13:15.867 Suite: accel_dif 00:13:15.867 Test: verify: DIF generated, GUARD check ...passed 00:13:15.867 Test: verify: DIF generated, APPTAG check ...passed 00:13:15.867 Test: verify: DIF generated, REFTAG check ...passed 00:13:15.867 Test: verify: DIF not generated, GUARD check ...passed 00:13:15.867 Test: verify: DIF not generated, APPTAG check ...passed 00:13:15.867 Test: verify: DIF not generated, REFTAG check ...[2024-04-17 14:32:24.454645] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:15.867 [2024-04-17 14:32:24.454711] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:15.867 [2024-04-17 14:32:24.454748] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:15.867 [2024-04-17 14:32:24.454773] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:15.867 [2024-04-17 14:32:24.454795] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:15.867 [2024-04-17 14:32:24.454820] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:15.867 passed 00:13:15.867 Test: verify: 
APPTAG correct, APPTAG check ...passed 00:13:15.867 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:13:15.867 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:15.867 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:15.867 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:15.867 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:13:15.867 Test: generate copy: DIF generated, GUARD check ...passed 00:13:15.867 Test: generate copy: DIF generated, APTTAG check ...[2024-04-17 14:32:24.454873] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:15.867 [2024-04-17 14:32:24.455032] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:15.867 passed 00:13:15.867 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:15.867 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:15.867 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:15.867 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:15.867 Test: generate copy: iovecs-len validate ...passed 00:13:15.867 Test: generate copy: buffer alignment validate ...passed 00:13:15.867 00:13:15.867 [2024-04-17 14:32:24.455256] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:15.867 Run Summary: Type Total Ran Passed Failed Inactive 00:13:15.867 suites 1 1 n/a 0 0 00:13:15.867 tests 20 20 20 0 0 00:13:15.867 asserts 204 204 204 0 n/a 00:13:15.867 00:13:15.867 Elapsed time = 0.002 seconds 00:13:16.125 00:13:16.125 real 0m0.478s 00:13:16.125 user 0m0.518s 00:13:16.125 sys 0m0.097s 00:13:16.125 14:32:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.125 14:32:24 -- common/autotest_common.sh@10 -- # set +x 00:13:16.125 ************************************ 00:13:16.125 END TEST accel_dif_functional_tests 00:13:16.125 ************************************ 00:13:16.125 ************************************ 00:13:16.125 END TEST accel 00:13:16.125 ************************************ 00:13:16.125 00:13:16.125 real 0m33.949s 00:13:16.125 user 0m35.274s 00:13:16.125 sys 0m3.618s 00:13:16.125 14:32:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.125 14:32:24 -- common/autotest_common.sh@10 -- # set +x 00:13:16.125 14:32:24 -- spdk/autotest.sh@179 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:16.125 14:32:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:16.125 14:32:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.125 14:32:24 -- common/autotest_common.sh@10 -- # set +x 00:13:16.383 ************************************ 00:13:16.383 START TEST accel_rpc 00:13:16.383 ************************************ 00:13:16.383 14:32:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:16.383 * Looking for test storage... 
00:13:16.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:16.383 14:32:24 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:16.383 14:32:24 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61729 00:13:16.383 14:32:24 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:16.383 14:32:24 -- accel/accel_rpc.sh@15 -- # waitforlisten 61729 00:13:16.383 14:32:24 -- common/autotest_common.sh@817 -- # '[' -z 61729 ']' 00:13:16.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.383 14:32:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.383 14:32:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:16.383 14:32:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.383 14:32:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:16.383 14:32:24 -- common/autotest_common.sh@10 -- # set +x 00:13:16.383 [2024-04-17 14:32:24.916392] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:16.383 [2024-04-17 14:32:24.916514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61729 ] 00:13:16.642 [2024-04-17 14:32:25.054938] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.643 [2024-04-17 14:32:25.138827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.579 14:32:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.579 14:32:25 -- common/autotest_common.sh@850 -- # return 0 00:13:17.579 14:32:25 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:17.579 14:32:25 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:17.579 14:32:25 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:17.579 14:32:25 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:17.579 14:32:25 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:17.579 14:32:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:17.579 14:32:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.579 14:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:17.579 ************************************ 00:13:17.579 START TEST accel_assign_opcode 00:13:17.579 ************************************ 00:13:17.579 14:32:25 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:13:17.579 14:32:25 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:17.579 14:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.579 14:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:17.579 [2024-04-17 14:32:25.931515] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:17.579 14:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.579 14:32:25 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:17.579 14:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.579 14:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:17.579 [2024-04-17 14:32:25.943512] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:17.579 14:32:25 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.579 14:32:25 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:17.579 14:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.579 14:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:17.579 14:32:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.579 14:32:26 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:17.579 14:32:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.579 14:32:26 -- common/autotest_common.sh@10 -- # set +x 00:13:17.579 14:32:26 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:17.579 14:32:26 -- accel/accel_rpc.sh@42 -- # grep software 00:13:17.579 14:32:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.579 software 00:13:17.579 ************************************ 00:13:17.579 END TEST accel_assign_opcode 00:13:17.579 ************************************ 00:13:17.579 00:13:17.579 real 0m0.203s 00:13:17.579 user 0m0.054s 00:13:17.579 sys 0m0.009s 00:13:17.579 14:32:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:17.579 14:32:26 -- common/autotest_common.sh@10 -- # set +x 00:13:17.579 14:32:26 -- accel/accel_rpc.sh@55 -- # killprocess 61729 00:13:17.579 14:32:26 -- common/autotest_common.sh@936 -- # '[' -z 61729 ']' 00:13:17.579 14:32:26 -- common/autotest_common.sh@940 -- # kill -0 61729 00:13:17.579 14:32:26 -- common/autotest_common.sh@941 -- # uname 00:13:17.579 14:32:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:17.579 14:32:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61729 00:13:17.837 killing process with pid 61729 00:13:17.837 14:32:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:17.837 14:32:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:17.837 14:32:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61729' 00:13:17.837 14:32:26 -- common/autotest_common.sh@955 -- # kill 61729 00:13:17.837 14:32:26 -- common/autotest_common.sh@960 -- # wait 61729 00:13:18.107 00:13:18.107 real 0m1.695s 00:13:18.107 user 0m1.909s 00:13:18.107 sys 0m0.346s 00:13:18.107 14:32:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.107 14:32:26 -- common/autotest_common.sh@10 -- # set +x 00:13:18.107 ************************************ 00:13:18.107 END TEST accel_rpc 00:13:18.107 ************************************ 00:13:18.107 14:32:26 -- spdk/autotest.sh@180 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:18.107 14:32:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:18.107 14:32:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.107 14:32:26 -- common/autotest_common.sh@10 -- # set +x 00:13:18.107 ************************************ 00:13:18.107 START TEST app_cmdline 00:13:18.107 ************************************ 00:13:18.107 14:32:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:18.107 * Looking for test storage... 00:13:18.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:18.107 14:32:26 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:18.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
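The accel_assign_opcode suite above exercises the accel layer's opcode-to-module mapping over JSON-RPC: spdk_tgt is started with --wait-for-rpc, the copy opcode is assigned (first to a bogus module, then to software), the framework is initialized, and the assignment is read back. Reproduced by hand with rpc.py against the same repo layout as this run, the sequence looks roughly like:

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m software      # map the copy opcode to the software module
    ./scripts/rpc.py framework_start_init                      # finish startup so the assignment takes effect
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software
    kill %1                                                    # stop the target when done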
00:13:18.107 14:32:26 -- app/cmdline.sh@17 -- # spdk_tgt_pid=61831 00:13:18.107 14:32:26 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:18.107 14:32:26 -- app/cmdline.sh@18 -- # waitforlisten 61831 00:13:18.107 14:32:26 -- common/autotest_common.sh@817 -- # '[' -z 61831 ']' 00:13:18.107 14:32:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.107 14:32:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:18.107 14:32:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.107 14:32:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:18.107 14:32:26 -- common/autotest_common.sh@10 -- # set +x 00:13:18.379 [2024-04-17 14:32:26.720615] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:18.379 [2024-04-17 14:32:26.720738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61831 ] 00:13:18.379 [2024-04-17 14:32:26.863161] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.379 [2024-04-17 14:32:26.965692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.314 14:32:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:19.314 14:32:27 -- common/autotest_common.sh@850 -- # return 0 00:13:19.314 14:32:27 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:19.573 { 00:13:19.573 "version": "SPDK v24.05-pre git sha1 0fa934e8f", 00:13:19.573 "fields": { 00:13:19.573 "major": 24, 00:13:19.573 "minor": 5, 00:13:19.573 "patch": 0, 00:13:19.573 "suffix": "-pre", 00:13:19.573 "commit": "0fa934e8f" 00:13:19.573 } 00:13:19.573 } 00:13:19.573 14:32:27 -- app/cmdline.sh@22 -- # expected_methods=() 00:13:19.573 14:32:27 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:19.573 14:32:27 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:19.573 14:32:27 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:19.573 14:32:27 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:19.573 14:32:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.573 14:32:27 -- common/autotest_common.sh@10 -- # set +x 00:13:19.573 14:32:27 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:19.573 14:32:27 -- app/cmdline.sh@26 -- # sort 00:13:19.573 14:32:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.573 14:32:28 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:19.573 14:32:28 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:19.573 14:32:28 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:19.573 14:32:28 -- common/autotest_common.sh@638 -- # local es=0 00:13:19.573 14:32:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:19.573 14:32:28 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.573 14:32:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:19.573 14:32:28 -- common/autotest_common.sh@630 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.573 14:32:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:19.573 14:32:28 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.573 14:32:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:19.573 14:32:28 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.573 14:32:28 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:19.573 14:32:28 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:19.831 request: 00:13:19.831 { 00:13:19.831 "method": "env_dpdk_get_mem_stats", 00:13:19.831 "req_id": 1 00:13:19.831 } 00:13:19.831 Got JSON-RPC error response 00:13:19.831 response: 00:13:19.831 { 00:13:19.831 "code": -32601, 00:13:19.831 "message": "Method not found" 00:13:19.831 } 00:13:19.831 14:32:28 -- common/autotest_common.sh@641 -- # es=1 00:13:19.831 14:32:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:19.831 14:32:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:19.831 14:32:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:19.831 14:32:28 -- app/cmdline.sh@1 -- # killprocess 61831 00:13:19.831 14:32:28 -- common/autotest_common.sh@936 -- # '[' -z 61831 ']' 00:13:19.831 14:32:28 -- common/autotest_common.sh@940 -- # kill -0 61831 00:13:19.831 14:32:28 -- common/autotest_common.sh@941 -- # uname 00:13:19.831 14:32:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:19.831 14:32:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61831 00:13:19.831 killing process with pid 61831 00:13:19.831 14:32:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:19.831 14:32:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:19.831 14:32:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61831' 00:13:19.831 14:32:28 -- common/autotest_common.sh@955 -- # kill 61831 00:13:19.831 14:32:28 -- common/autotest_common.sh@960 -- # wait 61831 00:13:20.397 ************************************ 00:13:20.397 END TEST app_cmdline 00:13:20.397 ************************************ 00:13:20.397 00:13:20.397 real 0m2.137s 00:13:20.397 user 0m2.867s 00:13:20.397 sys 0m0.383s 00:13:20.397 14:32:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:20.397 14:32:28 -- common/autotest_common.sh@10 -- # set +x 00:13:20.397 14:32:28 -- spdk/autotest.sh@181 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:20.397 14:32:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:20.397 14:32:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:20.397 14:32:28 -- common/autotest_common.sh@10 -- # set +x 00:13:20.397 ************************************ 00:13:20.397 START TEST version 00:13:20.397 ************************************ 00:13:20.397 14:32:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:20.397 * Looking for test storage... 
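The cmdline suite above starts spdk_tgt with an RPC allowlist (--rpcs-allowed spdk_get_version,rpc_get_methods), so the env_dpdk_get_mem_stats call is rejected with JSON-RPC error -32601 (Method not found) while the two allowed methods succeed. The same behaviour can be checked by hand:

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version         # allowed: returns the version object shown above
    ./scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected: "Method not found" (-32601)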
00:13:20.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:20.397 14:32:28 -- app/version.sh@17 -- # get_header_version major 00:13:20.397 14:32:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:20.397 14:32:28 -- app/version.sh@14 -- # cut -f2 00:13:20.397 14:32:28 -- app/version.sh@14 -- # tr -d '"' 00:13:20.397 14:32:28 -- app/version.sh@17 -- # major=24 00:13:20.397 14:32:28 -- app/version.sh@18 -- # get_header_version minor 00:13:20.397 14:32:28 -- app/version.sh@14 -- # cut -f2 00:13:20.397 14:32:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:20.397 14:32:28 -- app/version.sh@14 -- # tr -d '"' 00:13:20.397 14:32:28 -- app/version.sh@18 -- # minor=5 00:13:20.397 14:32:28 -- app/version.sh@19 -- # get_header_version patch 00:13:20.397 14:32:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:20.397 14:32:28 -- app/version.sh@14 -- # cut -f2 00:13:20.397 14:32:28 -- app/version.sh@14 -- # tr -d '"' 00:13:20.397 14:32:28 -- app/version.sh@19 -- # patch=0 00:13:20.397 14:32:28 -- app/version.sh@20 -- # get_header_version suffix 00:13:20.397 14:32:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:20.397 14:32:28 -- app/version.sh@14 -- # cut -f2 00:13:20.397 14:32:28 -- app/version.sh@14 -- # tr -d '"' 00:13:20.397 14:32:28 -- app/version.sh@20 -- # suffix=-pre 00:13:20.397 14:32:28 -- app/version.sh@22 -- # version=24.5 00:13:20.397 14:32:28 -- app/version.sh@25 -- # (( patch != 0 )) 00:13:20.397 14:32:28 -- app/version.sh@28 -- # version=24.5rc0 00:13:20.397 14:32:28 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:20.397 14:32:28 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:20.397 14:32:28 -- app/version.sh@30 -- # py_version=24.5rc0 00:13:20.397 14:32:28 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:13:20.397 00:13:20.397 real 0m0.141s 00:13:20.397 user 0m0.083s 00:13:20.397 sys 0m0.086s 00:13:20.397 ************************************ 00:13:20.397 END TEST version 00:13:20.397 ************************************ 00:13:20.397 14:32:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:20.397 14:32:28 -- common/autotest_common.sh@10 -- # set +x 00:13:20.397 14:32:28 -- spdk/autotest.sh@183 -- # '[' 0 -eq 1 ']' 00:13:20.397 14:32:28 -- spdk/autotest.sh@193 -- # uname -s 00:13:20.397 14:32:28 -- spdk/autotest.sh@193 -- # [[ Linux == Linux ]] 00:13:20.397 14:32:28 -- spdk/autotest.sh@194 -- # [[ 0 -eq 1 ]] 00:13:20.397 14:32:28 -- spdk/autotest.sh@194 -- # [[ 1 -eq 1 ]] 00:13:20.397 14:32:28 -- spdk/autotest.sh@200 -- # [[ 0 -eq 0 ]] 00:13:20.397 14:32:28 -- spdk/autotest.sh@201 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:13:20.397 14:32:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:20.397 14:32:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:20.397 14:32:28 -- common/autotest_common.sh@10 -- # set +x 00:13:20.655 ************************************ 00:13:20.655 START TEST spdk_dd 00:13:20.655 
************************************ 00:13:20.655 14:32:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:13:20.655 * Looking for test storage... 00:13:20.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:20.655 14:32:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.655 14:32:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.655 14:32:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.655 14:32:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.655 14:32:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.655 14:32:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.655 14:32:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.655 14:32:29 -- paths/export.sh@5 -- # export PATH 00:13:20.655 14:32:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.655 14:32:29 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:20.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:20.913 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:20.913 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:20.913 14:32:29 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:13:20.913 14:32:29 -- dd/dd.sh@11 -- # nvme_in_userspace 00:13:20.913 14:32:29 -- scripts/common.sh@309 -- # local bdf bdfs 00:13:20.913 14:32:29 -- scripts/common.sh@310 -- # local nvmes 00:13:20.913 14:32:29 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:13:20.913 14:32:29 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:20.913 14:32:29 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:13:20.913 14:32:29 -- scripts/common.sh@295 -- # local bdf= 00:13:20.913 14:32:29 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:13:20.913 14:32:29 -- scripts/common.sh@230 -- # local class 
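dd.sh begins by locating NVMe controllers that are usable from userspace: the nvme_in_userspace helper entered in the trace above (and continued below) matches PCI functions whose class/subclass/prog-if is 01/08/02, i.e. mass storage / NVM / NVMe Express. Stripped of the shell plumbing, the probe reduces to the lspci pipeline visible in the trace that follows:

    # print the BDF of every PCI function whose class code is 0108 with prog-if 02 (an NVMe controller)
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this VM it yields 0000:00:10.0 and 0000:00:11.0, the two emulated controllers the dd tests run against.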
00:13:20.913 14:32:29 -- scripts/common.sh@231 -- # local subclass 00:13:20.913 14:32:29 -- scripts/common.sh@232 -- # local progif 00:13:20.913 14:32:29 -- scripts/common.sh@233 -- # printf %02x 1 00:13:20.913 14:32:29 -- scripts/common.sh@233 -- # class=01 00:13:20.913 14:32:29 -- scripts/common.sh@234 -- # printf %02x 8 00:13:20.913 14:32:29 -- scripts/common.sh@234 -- # subclass=08 00:13:20.913 14:32:29 -- scripts/common.sh@235 -- # printf %02x 2 00:13:20.913 14:32:29 -- scripts/common.sh@235 -- # progif=02 00:13:20.913 14:32:29 -- scripts/common.sh@237 -- # hash lspci 00:13:20.913 14:32:29 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:13:20.913 14:32:29 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:13:20.913 14:32:29 -- scripts/common.sh@240 -- # grep -i -- -p02 00:13:20.913 14:32:29 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:20.913 14:32:29 -- scripts/common.sh@242 -- # tr -d '"' 00:13:20.913 14:32:29 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:20.913 14:32:29 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:13:20.913 14:32:29 -- scripts/common.sh@15 -- # local i 00:13:20.913 14:32:29 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:20.913 14:32:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:20.913 14:32:29 -- scripts/common.sh@24 -- # return 0 00:13:20.913 14:32:29 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:13:20.913 14:32:29 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:20.913 14:32:29 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:13:20.913 14:32:29 -- scripts/common.sh@15 -- # local i 00:13:20.913 14:32:29 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:20.913 14:32:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:20.913 14:32:29 -- scripts/common.sh@24 -- # return 0 00:13:20.914 14:32:29 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:13:20.914 14:32:29 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:20.914 14:32:29 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:20.914 14:32:29 -- scripts/common.sh@320 -- # uname -s 00:13:20.914 14:32:29 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:20.914 14:32:29 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:20.914 14:32:29 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:20.914 14:32:29 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:20.914 14:32:29 -- scripts/common.sh@320 -- # uname -s 00:13:20.914 14:32:29 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:20.914 14:32:29 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:20.914 14:32:29 -- scripts/common.sh@325 -- # (( 2 )) 00:13:20.914 14:32:29 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:13:20.914 14:32:29 -- dd/dd.sh@13 -- # check_liburing 00:13:20.914 14:32:29 -- dd/common.sh@139 -- # local lib so 00:13:20.914 14:32:29 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:13:20.914 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.914 14:32:29 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:13:20.914 14:32:29 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:13:21.174 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.174 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # 
read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # 
[[ librte_kvargs.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:21.175 14:32:29 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:13:21.175 14:32:29 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:13:21.175 * spdk_dd linked to liburing 00:13:21.175 14:32:29 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:21.175 14:32:29 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:21.175 14:32:29 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:21.175 14:32:29 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:21.175 14:32:29 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:21.175 14:32:29 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:21.175 14:32:29 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:13:21.175 14:32:29 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:21.175 14:32:29 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:21.175 14:32:29 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:21.175 14:32:29 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:21.175 14:32:29 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:21.175 14:32:29 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:21.175 14:32:29 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:21.175 14:32:29 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:21.175 14:32:29 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:21.175 14:32:29 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:21.175 14:32:29 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:21.175 14:32:29 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:21.175 14:32:29 -- 
common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:21.175 14:32:29 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:21.175 14:32:29 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:21.175 14:32:29 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:21.175 14:32:29 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:21.175 14:32:29 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:21.175 14:32:29 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:21.175 14:32:29 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:21.175 14:32:29 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:21.176 14:32:29 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:21.176 14:32:29 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:21.176 14:32:29 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:21.176 14:32:29 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:21.176 14:32:29 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:21.176 14:32:29 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:21.176 14:32:29 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:21.176 14:32:29 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:21.176 14:32:29 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:21.176 14:32:29 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:21.176 14:32:29 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:21.176 14:32:29 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:21.176 14:32:29 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:21.176 14:32:29 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:21.176 14:32:29 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:21.176 14:32:29 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:21.176 14:32:29 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:21.176 14:32:29 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:21.176 14:32:29 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:21.176 14:32:29 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:13:21.176 14:32:29 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:13:21.176 14:32:29 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:21.176 14:32:29 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:13:21.176 14:32:29 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:13:21.176 14:32:29 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:13:21.176 14:32:29 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:13:21.176 14:32:29 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:13:21.176 14:32:29 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=y 00:13:21.176 14:32:29 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:13:21.176 14:32:29 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:13:21.176 14:32:29 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:13:21.176 14:32:29 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:13:21.176 14:32:29 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:13:21.176 14:32:29 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:13:21.176 14:32:29 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:13:21.176 14:32:29 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:13:21.176 14:32:29 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:13:21.176 14:32:29 -- 
common/build_config.sh@64 -- # CONFIG_APPS=y 00:13:21.176 14:32:29 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:13:21.176 14:32:29 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:13:21.176 14:32:29 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:13:21.176 14:32:29 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:21.176 14:32:29 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:13:21.176 14:32:29 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:13:21.176 14:32:29 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:13:21.176 14:32:29 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:13:21.176 14:32:29 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:13:21.176 14:32:29 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:13:21.176 14:32:29 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:13:21.176 14:32:29 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:13:21.176 14:32:29 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:13:21.176 14:32:29 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:13:21.176 14:32:29 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:13:21.176 14:32:29 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:21.176 14:32:29 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:13:21.176 14:32:29 -- common/build_config.sh@82 -- # CONFIG_URING=y 00:13:21.176 14:32:29 -- dd/common.sh@149 -- # [[ y != y ]] 00:13:21.176 14:32:29 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:13:21.176 14:32:29 -- dd/common.sh@156 -- # export liburing_in_use=1 00:13:21.176 14:32:29 -- dd/common.sh@156 -- # liburing_in_use=1 00:13:21.176 14:32:29 -- dd/common.sh@157 -- # return 0 00:13:21.176 14:32:29 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:13:21.176 14:32:29 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:13:21.176 14:32:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:21.176 14:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.176 14:32:29 -- common/autotest_common.sh@10 -- # set +x 00:13:21.176 ************************************ 00:13:21.176 START TEST spdk_dd_basic_rw 00:13:21.176 ************************************ 00:13:21.176 14:32:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:13:21.176 * Looking for test storage... 
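The long run of "[[ ... == liburing.so.* ]]" tests above is dd/common.sh walking every shared object that spdk_dd links against, looking for liburing; once liburing.so.2 matches it prints "spdk_dd linked to liburing", sources build_config.sh, and sets liburing_in_use=1 for dd.sh. A minimal sketch of that detection loop, assuming ldd-style "lib => path" output from the spdk_dd binary (a paraphrase, not the verbatim helper, which additionally checks that /usr/lib64/liburing.so.2 exists before deciding):

# Sketch of the liburing link check seen in dd/common.sh@142-156 (paraphrased).
liburing_in_use=0
while read -r lib _ so _; do
    # ldd prints lines like: "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
    if [[ $lib == liburing.so.* ]]; then
        liburing_in_use=1
        break
    fi
done < <(ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
export liburing_in_use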
00:13:21.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:21.176 14:32:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:21.176 14:32:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.176 14:32:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.176 14:32:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.176 14:32:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.176 14:32:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.176 14:32:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.176 14:32:29 -- paths/export.sh@5 -- # export PATH 00:13:21.176 14:32:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.176 14:32:29 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:13:21.176 14:32:29 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:13:21.176 14:32:29 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:13:21.176 14:32:29 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:13:21.176 14:32:29 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:13:21.176 14:32:29 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:13:21.176 14:32:29 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:13:21.176 14:32:29 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:21.176 14:32:29 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:21.176 14:32:29 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:13:21.176 14:32:29 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:13:21.176 14:32:29 -- dd/common.sh@126 -- # mapfile -t id 00:13:21.176 14:32:29 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:13:21.436 14:32:29 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On 
Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:13:21.436 14:32:29 -- dd/common.sh@130 -- # lbaf=04 00:13:21.437 14:32:29 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID 
List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write 
Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:13:21.437 14:32:29 -- dd/common.sh@132 -- # lbaf=4096 00:13:21.437 14:32:29 -- dd/common.sh@134 -- # echo 4096 00:13:21.437 14:32:29 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:13:21.437 14:32:29 -- dd/basic_rw.sh@96 -- # : 00:13:21.437 14:32:29 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:21.437 14:32:29 -- dd/basic_rw.sh@96 -- # gen_conf 00:13:21.437 14:32:29 -- dd/common.sh@31 -- # xtrace_disable 00:13:21.437 14:32:29 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:13:21.437 14:32:29 -- common/autotest_common.sh@10 -- # set +x 00:13:21.437 14:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.437 14:32:29 -- common/autotest_common.sh@10 -- # set +x 00:13:21.437 { 
00:13:21.437 "subsystems": [ 00:13:21.437 { 00:13:21.437 "subsystem": "bdev", 00:13:21.437 "config": [ 00:13:21.437 { 00:13:21.437 "params": { 00:13:21.437 "trtype": "pcie", 00:13:21.437 "traddr": "0000:00:10.0", 00:13:21.437 "name": "Nvme0" 00:13:21.437 }, 00:13:21.437 "method": "bdev_nvme_attach_controller" 00:13:21.437 }, 00:13:21.437 { 00:13:21.437 "method": "bdev_wait_for_examine" 00:13:21.437 } 00:13:21.437 ] 00:13:21.437 } 00:13:21.437 ] 00:13:21.437 } 00:13:21.437 ************************************ 00:13:21.437 START TEST dd_bs_lt_native_bs 00:13:21.437 ************************************ 00:13:21.437 14:32:29 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:21.437 14:32:29 -- common/autotest_common.sh@638 -- # local es=0 00:13:21.437 14:32:29 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:21.437 14:32:29 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:21.437 14:32:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:21.437 14:32:29 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:21.437 14:32:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:21.437 14:32:29 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:21.437 14:32:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:21.437 14:32:29 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:21.437 14:32:29 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:21.437 14:32:29 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:21.437 [2024-04-17 14:32:30.013811] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
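The get_native_nvme_bs step traced earlier (dd/common.sh@124-134) captures the spdk_nvme_identify dump shown above and pulls the in-use block size out of it with two bash regex matches: first the current LBA format index ("LBA Format #04" on this controller), then that format's data size (4096), which becomes native_bs for the tests that follow. A simplified sketch of the same extraction, assuming the identify text is held in a single variable rather than the mapfile array the real helper uses:

# Sketch: derive the native block size from spdk_nvme_identify output (cf. get_native_nvme_bs).
id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
if [[ $id =~ Current\ LBA\ Format:\ *LBA\ Format\ \#([0-9]+) ]]; then
    lbaf=${BASH_REMATCH[1]}                      # 04 here
fi
if [[ $id =~ LBA\ Format\ \#$lbaf:\ Data\ Size:\ *([0-9]+) ]]; then
    native_bs=${BASH_REMATCH[1]}                 # 4096 here
fi
echo "$native_bs"

The dd_bs_lt_native_bs case starting here then hands spdk_dd a --bs of 2048, smaller than that native 4096, and expects the copy to be rejected.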
00:13:21.437 [2024-04-17 14:32:30.014188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62171 ] 00:13:21.696 [2024-04-17 14:32:30.153335] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.696 [2024-04-17 14:32:30.225944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.955 [2024-04-17 14:32:30.341976] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:13:21.955 [2024-04-17 14:32:30.342039] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:21.955 [2024-04-17 14:32:30.421363] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:13:21.955 14:32:30 -- common/autotest_common.sh@641 -- # es=234 00:13:21.955 14:32:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:21.955 14:32:30 -- common/autotest_common.sh@650 -- # es=106 00:13:21.955 14:32:30 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:21.955 14:32:30 -- common/autotest_common.sh@658 -- # es=1 00:13:21.955 14:32:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:21.955 00:13:21.955 real 0m0.585s 00:13:21.955 user 0m0.378s 00:13:21.955 sys 0m0.099s 00:13:21.955 14:32:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:21.955 ************************************ 00:13:21.955 END TEST dd_bs_lt_native_bs 00:13:21.955 ************************************ 00:13:21.955 14:32:30 -- common/autotest_common.sh@10 -- # set +x 00:13:22.214 14:32:30 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:13:22.214 14:32:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:22.214 14:32:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.214 14:32:30 -- common/autotest_common.sh@10 -- # set +x 00:13:22.214 ************************************ 00:13:22.214 START TEST dd_rw 00:13:22.214 ************************************ 00:13:22.214 14:32:30 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:13:22.215 14:32:30 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:13:22.215 14:32:30 -- dd/basic_rw.sh@12 -- # local count size 00:13:22.215 14:32:30 -- dd/basic_rw.sh@13 -- # local qds bss 00:13:22.215 14:32:30 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:13:22.215 14:32:30 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:13:22.215 14:32:30 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:13:22.215 14:32:30 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:13:22.215 14:32:30 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:13:22.215 14:32:30 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:13:22.215 14:32:30 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:13:22.215 14:32:30 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:13:22.215 14:32:30 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:22.215 14:32:30 -- dd/basic_rw.sh@23 -- # count=15 00:13:22.215 14:32:30 -- dd/basic_rw.sh@24 -- # count=15 00:13:22.215 14:32:30 -- dd/basic_rw.sh@25 -- # size=61440 00:13:22.215 14:32:30 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:13:22.215 14:32:30 -- dd/common.sh@98 -- # xtrace_disable 00:13:22.215 14:32:30 -- common/autotest_common.sh@10 -- # set +x 00:13:22.804 14:32:31 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
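Every spdk_dd call in this test receives its bdev configuration as JSON on a file descriptor (--json /dev/fd/62), produced by the gen_conf helper that appears next to each command in the trace: a single bdev subsystem that attaches the PCIe controller at 0000:00:10.0 as Nvme0 and then waits for examine. Reassembled from the config blocks printed throughout this log, an equivalent static version would look like the following (the real gen_conf builds it from the method_bdev_nvme_attach_controller_0 array declared in basic_rw.sh; the heredoc form here is only illustrative):

# Roughly what gen_conf writes to the descriptor handed to spdk_dd via --json.
gen_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}
# e.g.  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)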
00:13:22.804 14:32:31 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:22.804 14:32:31 -- dd/common.sh@31 -- # xtrace_disable 00:13:22.804 14:32:31 -- common/autotest_common.sh@10 -- # set +x 00:13:23.062 [2024-04-17 14:32:31.418016] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:23.062 [2024-04-17 14:32:31.418442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62210 ] 00:13:23.062 { 00:13:23.062 "subsystems": [ 00:13:23.062 { 00:13:23.062 "subsystem": "bdev", 00:13:23.062 "config": [ 00:13:23.062 { 00:13:23.062 "params": { 00:13:23.062 "trtype": "pcie", 00:13:23.062 "traddr": "0000:00:10.0", 00:13:23.062 "name": "Nvme0" 00:13:23.062 }, 00:13:23.062 "method": "bdev_nvme_attach_controller" 00:13:23.062 }, 00:13:23.062 { 00:13:23.062 "method": "bdev_wait_for_examine" 00:13:23.062 } 00:13:23.062 ] 00:13:23.062 } 00:13:23.062 ] 00:13:23.062 } 00:13:23.062 [2024-04-17 14:32:31.563639] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.062 [2024-04-17 14:32:31.649819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.579  Copying: 60/60 [kB] (average 29 MBps) 00:13:23.579 00:13:23.579 14:32:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:13:23.579 14:32:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:23.579 14:32:31 -- dd/common.sh@31 -- # xtrace_disable 00:13:23.579 14:32:31 -- common/autotest_common.sh@10 -- # set +x 00:13:23.579 [2024-04-17 14:32:32.021604] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
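The 4096-byte, qd=1 pass that just finished (60 kB written out and 60 kB read back, both at about 29 MBps) is the first turn of the dd_rw loop set up above: the block sizes are the native 4096 shifted left 0, 1 and 2 times (4096, 8192, 16384), each paired with queue depths 1 and 64, and every combination writes dd.dump0 to Nvme0n1, reads it back into dd.dump1, compares the two files, and wipes the bdev before the next run. A condensed sketch of that flow under the same names the trace uses; the helpers gen_conf and clear_nvme come from dd/common.sh, and the count arithmetic is an illustrative assumption (the trace only shows the resulting counts 15, 7 and 3):

# Condensed sketch of the dd_rw iteration pattern (cf. test/dd/basic_rw.sh).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
native_bs=4096
qds=(1 64)
bss=()
for s in 0 1 2; do bss+=($((native_bs << s))); done          # 4096 8192 16384

for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=$((61440 / bs))                                # gives 15, 7, 3 as in the trace (assumed derivation)
        "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
        "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
        diff -q "$dump0" "$dump1"
        clear_nvme Nvme0n1 '' $((count * bs))
    done
done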
00:13:23.579 [2024-04-17 14:32:32.021702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62225 ] 00:13:23.579 { 00:13:23.579 "subsystems": [ 00:13:23.579 { 00:13:23.579 "subsystem": "bdev", 00:13:23.579 "config": [ 00:13:23.579 { 00:13:23.579 "params": { 00:13:23.579 "trtype": "pcie", 00:13:23.579 "traddr": "0000:00:10.0", 00:13:23.579 "name": "Nvme0" 00:13:23.579 }, 00:13:23.579 "method": "bdev_nvme_attach_controller" 00:13:23.579 }, 00:13:23.579 { 00:13:23.579 "method": "bdev_wait_for_examine" 00:13:23.579 } 00:13:23.579 ] 00:13:23.579 } 00:13:23.579 ] 00:13:23.579 } 00:13:23.579 [2024-04-17 14:32:32.152252] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.837 [2024-04-17 14:32:32.236459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.096  Copying: 60/60 [kB] (average 29 MBps) 00:13:24.096 00:13:24.096 14:32:32 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:24.096 14:32:32 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:13:24.096 14:32:32 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:24.096 14:32:32 -- dd/common.sh@11 -- # local nvme_ref= 00:13:24.096 14:32:32 -- dd/common.sh@12 -- # local size=61440 00:13:24.096 14:32:32 -- dd/common.sh@14 -- # local bs=1048576 00:13:24.096 14:32:32 -- dd/common.sh@15 -- # local count=1 00:13:24.096 14:32:32 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:24.096 14:32:32 -- dd/common.sh@18 -- # gen_conf 00:13:24.096 14:32:32 -- dd/common.sh@31 -- # xtrace_disable 00:13:24.096 14:32:32 -- common/autotest_common.sh@10 -- # set +x 00:13:24.096 [2024-04-17 14:32:32.656628] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
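Before the qd=64 pass begins, the trace above shows clear_nvme Nvme0n1 '' 61440 resetting the device: a single copy of zeros onto the bdev with --if=/dev/zero --bs=1048576 --count=1, which is where the recurring "Copying: 1024/1024 [kB]" lines come from. A rough equivalent of that helper, keeping only what this trace reveals (the empty second argument is an unused nvme_ref in these calls, and how count is derived from the size argument is not visible in the log, so count=1 is simply taken from the trace):

# Rough equivalent of clear_nvme as traced in dd/common.sh@10-18: zero out the bdev between runs.
clear_nvme() {
    local bdev=$1 nvme_ref=$2 size=$3
    local bs=1048576 count=1                  # values seen in the trace for size=61440/57344/49152
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs="$bs" --ob="$bdev" \
        --count="$count" --json <(gen_conf)
}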
00:13:24.096 [2024-04-17 14:32:32.657338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62246 ] 00:13:24.096 { 00:13:24.096 "subsystems": [ 00:13:24.096 { 00:13:24.096 "subsystem": "bdev", 00:13:24.096 "config": [ 00:13:24.096 { 00:13:24.096 "params": { 00:13:24.096 "trtype": "pcie", 00:13:24.096 "traddr": "0000:00:10.0", 00:13:24.096 "name": "Nvme0" 00:13:24.096 }, 00:13:24.096 "method": "bdev_nvme_attach_controller" 00:13:24.096 }, 00:13:24.096 { 00:13:24.096 "method": "bdev_wait_for_examine" 00:13:24.096 } 00:13:24.096 ] 00:13:24.096 } 00:13:24.096 ] 00:13:24.096 } 00:13:24.355 [2024-04-17 14:32:32.795869] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.355 [2024-04-17 14:32:32.881164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.613  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:24.613 00:13:24.613 14:32:33 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:24.613 14:32:33 -- dd/basic_rw.sh@23 -- # count=15 00:13:24.613 14:32:33 -- dd/basic_rw.sh@24 -- # count=15 00:13:24.613 14:32:33 -- dd/basic_rw.sh@25 -- # size=61440 00:13:24.613 14:32:33 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:13:24.613 14:32:33 -- dd/common.sh@98 -- # xtrace_disable 00:13:24.613 14:32:33 -- common/autotest_common.sh@10 -- # set +x 00:13:25.550 14:32:33 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:13:25.550 14:32:33 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:25.550 14:32:33 -- dd/common.sh@31 -- # xtrace_disable 00:13:25.550 14:32:33 -- common/autotest_common.sh@10 -- # set +x 00:13:25.550 [2024-04-17 14:32:33.956756] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:25.550 [2024-04-17 14:32:33.957232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62265 ] 00:13:25.550 { 00:13:25.550 "subsystems": [ 00:13:25.550 { 00:13:25.550 "subsystem": "bdev", 00:13:25.550 "config": [ 00:13:25.550 { 00:13:25.550 "params": { 00:13:25.550 "trtype": "pcie", 00:13:25.550 "traddr": "0000:00:10.0", 00:13:25.550 "name": "Nvme0" 00:13:25.550 }, 00:13:25.550 "method": "bdev_nvme_attach_controller" 00:13:25.550 }, 00:13:25.550 { 00:13:25.550 "method": "bdev_wait_for_examine" 00:13:25.550 } 00:13:25.550 ] 00:13:25.550 } 00:13:25.550 ] 00:13:25.550 } 00:13:25.550 [2024-04-17 14:32:34.097683] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.810 [2024-04-17 14:32:34.155439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.070  Copying: 60/60 [kB] (average 58 MBps) 00:13:26.070 00:13:26.070 14:32:34 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:26.070 14:32:34 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:13:26.070 14:32:34 -- dd/common.sh@31 -- # xtrace_disable 00:13:26.070 14:32:34 -- common/autotest_common.sh@10 -- # set +x 00:13:26.070 [2024-04-17 14:32:34.503895] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:26.070 [2024-04-17 14:32:34.504022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62273 ] 00:13:26.070 { 00:13:26.070 "subsystems": [ 00:13:26.070 { 00:13:26.070 "subsystem": "bdev", 00:13:26.070 "config": [ 00:13:26.070 { 00:13:26.070 "params": { 00:13:26.070 "trtype": "pcie", 00:13:26.070 "traddr": "0000:00:10.0", 00:13:26.070 "name": "Nvme0" 00:13:26.070 }, 00:13:26.070 "method": "bdev_nvme_attach_controller" 00:13:26.070 }, 00:13:26.070 { 00:13:26.070 "method": "bdev_wait_for_examine" 00:13:26.070 } 00:13:26.070 ] 00:13:26.070 } 00:13:26.070 ] 00:13:26.070 } 00:13:26.070 [2024-04-17 14:32:34.639210] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.328 [2024-04-17 14:32:34.697166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.587  Copying: 60/60 [kB] (average 58 MBps) 00:13:26.587 00:13:26.588 14:32:34 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:26.588 14:32:34 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:13:26.588 14:32:34 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:26.588 14:32:34 -- dd/common.sh@11 -- # local nvme_ref= 00:13:26.588 14:32:34 -- dd/common.sh@12 -- # local size=61440 00:13:26.588 14:32:34 -- dd/common.sh@14 -- # local bs=1048576 00:13:26.588 14:32:35 -- dd/common.sh@15 -- # local count=1 00:13:26.588 14:32:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:26.588 14:32:35 -- dd/common.sh@18 -- # gen_conf 00:13:26.588 14:32:35 -- dd/common.sh@31 -- # xtrace_disable 00:13:26.588 14:32:35 -- common/autotest_common.sh@10 -- # set +x 00:13:26.588 { 00:13:26.588 "subsystems": [ 00:13:26.588 { 00:13:26.588 "subsystem": "bdev", 00:13:26.588 "config": [ 00:13:26.588 { 00:13:26.588 "params": { 00:13:26.588 "trtype": "pcie", 00:13:26.588 "traddr": "0000:00:10.0", 00:13:26.588 "name": "Nvme0" 00:13:26.588 }, 00:13:26.588 "method": "bdev_nvme_attach_controller" 00:13:26.588 }, 00:13:26.588 { 00:13:26.588 "method": "bdev_wait_for_examine" 00:13:26.588 } 00:13:26.588 ] 00:13:26.588 } 00:13:26.588 ] 00:13:26.588 } 00:13:26.588 [2024-04-17 14:32:35.053397] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:26.588 [2024-04-17 14:32:35.053511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62294 ] 00:13:26.855 [2024-04-17 14:32:35.196054] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.855 [2024-04-17 14:32:35.259076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.115  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:27.115 00:13:27.115 14:32:35 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:13:27.115 14:32:35 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:27.115 14:32:35 -- dd/basic_rw.sh@23 -- # count=7 00:13:27.115 14:32:35 -- dd/basic_rw.sh@24 -- # count=7 00:13:27.115 14:32:35 -- dd/basic_rw.sh@25 -- # size=57344 00:13:27.115 14:32:35 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:13:27.115 14:32:35 -- dd/common.sh@98 -- # xtrace_disable 00:13:27.115 14:32:35 -- common/autotest_common.sh@10 -- # set +x 00:13:27.683 14:32:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:13:27.683 14:32:36 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:27.683 14:32:36 -- dd/common.sh@31 -- # xtrace_disable 00:13:27.683 14:32:36 -- common/autotest_common.sh@10 -- # set +x 00:13:27.683 [2024-04-17 14:32:36.249280] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:27.683 [2024-04-17 14:32:36.249368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62313 ] 00:13:27.683 { 00:13:27.683 "subsystems": [ 00:13:27.683 { 00:13:27.683 "subsystem": "bdev", 00:13:27.683 "config": [ 00:13:27.683 { 00:13:27.683 "params": { 00:13:27.683 "trtype": "pcie", 00:13:27.683 "traddr": "0000:00:10.0", 00:13:27.683 "name": "Nvme0" 00:13:27.683 }, 00:13:27.683 "method": "bdev_nvme_attach_controller" 00:13:27.683 }, 00:13:27.683 { 00:13:27.684 "method": "bdev_wait_for_examine" 00:13:27.684 } 00:13:27.684 ] 00:13:27.684 } 00:13:27.684 ] 00:13:27.684 } 00:13:27.942 [2024-04-17 14:32:36.382626] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.942 [2024-04-17 14:32:36.457449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.201  Copying: 56/56 [kB] (average 54 MBps) 00:13:28.201 00:13:28.201 14:32:36 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:13:28.201 14:32:36 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:28.201 14:32:36 -- dd/common.sh@31 -- # xtrace_disable 00:13:28.201 14:32:36 -- common/autotest_common.sh@10 -- # set +x 00:13:28.459 [2024-04-17 14:32:36.827963] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:28.459 [2024-04-17 14:32:36.828065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62332 ] 00:13:28.459 { 00:13:28.459 "subsystems": [ 00:13:28.459 { 00:13:28.459 "subsystem": "bdev", 00:13:28.459 "config": [ 00:13:28.459 { 00:13:28.459 "params": { 00:13:28.459 "trtype": "pcie", 00:13:28.459 "traddr": "0000:00:10.0", 00:13:28.459 "name": "Nvme0" 00:13:28.459 }, 00:13:28.459 "method": "bdev_nvme_attach_controller" 00:13:28.459 }, 00:13:28.459 { 00:13:28.459 "method": "bdev_wait_for_examine" 00:13:28.459 } 00:13:28.459 ] 00:13:28.459 } 00:13:28.459 ] 00:13:28.459 } 00:13:28.459 [2024-04-17 14:32:36.963108] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.459 [2024-04-17 14:32:37.022272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.718  Copying: 56/56 [kB] (average 27 MBps) 00:13:28.718 00:13:28.977 14:32:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:28.977 14:32:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:13:28.977 14:32:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:28.977 14:32:37 -- dd/common.sh@11 -- # local nvme_ref= 00:13:28.977 14:32:37 -- dd/common.sh@12 -- # local size=57344 00:13:28.977 14:32:37 -- dd/common.sh@14 -- # local bs=1048576 00:13:28.977 14:32:37 -- dd/common.sh@15 -- # local count=1 00:13:28.977 14:32:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:28.977 14:32:37 -- dd/common.sh@18 -- # gen_conf 00:13:28.977 14:32:37 -- dd/common.sh@31 -- # xtrace_disable 00:13:28.977 14:32:37 -- common/autotest_common.sh@10 -- # set +x 00:13:28.977 { 00:13:28.977 "subsystems": [ 00:13:28.977 { 00:13:28.977 "subsystem": "bdev", 00:13:28.977 "config": [ 00:13:28.977 { 00:13:28.977 "params": { 00:13:28.977 "trtype": "pcie", 00:13:28.977 "traddr": "0000:00:10.0", 00:13:28.977 "name": "Nvme0" 00:13:28.977 }, 00:13:28.977 "method": "bdev_nvme_attach_controller" 00:13:28.977 }, 00:13:28.977 { 00:13:28.977 "method": "bdev_wait_for_examine" 00:13:28.977 } 00:13:28.977 ] 00:13:28.977 } 00:13:28.977 ] 00:13:28.977 } 00:13:28.977 [2024-04-17 14:32:37.381610] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
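All of the START TEST / END TEST banners and the real/user/sys timing lines in this log come from the run_test wrapper in autotest_common.sh, which the harness uses for spdk_dd_basic_rw, dd_bs_lt_native_bs, dd_rw and every other case here. A reduced sketch of what that wrapper has to be doing, inferred only from the banner and timing output visible in this trace (the real function also manages xtrace and exit-status bookkeeping that is not reproduced here):

# Reduced sketch of run_test as its output appears in this log (cf. common/autotest_common.sh).
run_test() {
    local name=$1
    shift
    printf '************************************\n'
    printf 'START TEST %s\n' "$name"
    printf '************************************\n'
    time "$@"
    local rc=$?
    printf '************************************\n'
    printf 'END TEST %s\n' "$name"
    printf '************************************\n'
    return "$rc"
}
# e.g.  run_test dd_rw basic_rw 4096    (as invoked at dd/basic_rw.sh@103 above)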
00:13:28.977 [2024-04-17 14:32:37.381705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62342 ] 00:13:28.977 [2024-04-17 14:32:37.515570] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.237 [2024-04-17 14:32:37.580407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.496  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:29.496 00:13:29.496 14:32:37 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:29.496 14:32:37 -- dd/basic_rw.sh@23 -- # count=7 00:13:29.496 14:32:37 -- dd/basic_rw.sh@24 -- # count=7 00:13:29.496 14:32:37 -- dd/basic_rw.sh@25 -- # size=57344 00:13:29.496 14:32:37 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:13:29.496 14:32:37 -- dd/common.sh@98 -- # xtrace_disable 00:13:29.496 14:32:37 -- common/autotest_common.sh@10 -- # set +x 00:13:30.065 14:32:38 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:13:30.065 14:32:38 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:30.065 14:32:38 -- dd/common.sh@31 -- # xtrace_disable 00:13:30.065 14:32:38 -- common/autotest_common.sh@10 -- # set +x 00:13:30.065 [2024-04-17 14:32:38.599366] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:30.065 [2024-04-17 14:32:38.599459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62361 ] 00:13:30.065 { 00:13:30.065 "subsystems": [ 00:13:30.065 { 00:13:30.065 "subsystem": "bdev", 00:13:30.065 "config": [ 00:13:30.065 { 00:13:30.065 "params": { 00:13:30.065 "trtype": "pcie", 00:13:30.065 "traddr": "0000:00:10.0", 00:13:30.065 "name": "Nvme0" 00:13:30.065 }, 00:13:30.065 "method": "bdev_nvme_attach_controller" 00:13:30.065 }, 00:13:30.065 { 00:13:30.065 "method": "bdev_wait_for_examine" 00:13:30.065 } 00:13:30.065 ] 00:13:30.065 } 00:13:30.065 ] 00:13:30.065 } 00:13:30.324 [2024-04-17 14:32:38.732777] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.324 [2024-04-17 14:32:38.811816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.582  Copying: 56/56 [kB] (average 54 MBps) 00:13:30.582 00:13:30.582 14:32:39 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:13:30.582 14:32:39 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:30.582 14:32:39 -- dd/common.sh@31 -- # xtrace_disable 00:13:30.582 14:32:39 -- common/autotest_common.sh@10 -- # set +x 00:13:30.582 { 00:13:30.582 "subsystems": [ 00:13:30.582 { 00:13:30.582 "subsystem": "bdev", 00:13:30.582 "config": [ 00:13:30.582 { 00:13:30.582 "params": { 00:13:30.582 "trtype": "pcie", 00:13:30.582 "traddr": "0000:00:10.0", 00:13:30.582 "name": "Nvme0" 00:13:30.582 }, 00:13:30.582 "method": "bdev_nvme_attach_controller" 00:13:30.582 }, 00:13:30.582 { 00:13:30.582 "method": "bdev_wait_for_examine" 00:13:30.582 } 00:13:30.582 ] 00:13:30.582 } 00:13:30.582 ] 00:13:30.582 } 00:13:30.842 [2024-04-17 14:32:39.186520] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:30.842 [2024-04-17 14:32:39.186649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62380 ] 00:13:30.842 [2024-04-17 14:32:39.327434] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.842 [2024-04-17 14:32:39.386803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.101  Copying: 56/56 [kB] (average 54 MBps) 00:13:31.101 00:13:31.101 14:32:39 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:31.101 14:32:39 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:13:31.101 14:32:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:31.101 14:32:39 -- dd/common.sh@11 -- # local nvme_ref= 00:13:31.101 14:32:39 -- dd/common.sh@12 -- # local size=57344 00:13:31.101 14:32:39 -- dd/common.sh@14 -- # local bs=1048576 00:13:31.101 14:32:39 -- dd/common.sh@15 -- # local count=1 00:13:31.101 14:32:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:31.101 14:32:39 -- dd/common.sh@18 -- # gen_conf 00:13:31.101 14:32:39 -- dd/common.sh@31 -- # xtrace_disable 00:13:31.101 14:32:39 -- common/autotest_common.sh@10 -- # set +x 00:13:31.360 [2024-04-17 14:32:39.735972] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:31.360 [2024-04-17 14:32:39.736056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62396 ] 00:13:31.360 { 00:13:31.360 "subsystems": [ 00:13:31.360 { 00:13:31.360 "subsystem": "bdev", 00:13:31.360 "config": [ 00:13:31.360 { 00:13:31.360 "params": { 00:13:31.360 "trtype": "pcie", 00:13:31.360 "traddr": "0000:00:10.0", 00:13:31.360 "name": "Nvme0" 00:13:31.360 }, 00:13:31.360 "method": "bdev_nvme_attach_controller" 00:13:31.360 }, 00:13:31.360 { 00:13:31.360 "method": "bdev_wait_for_examine" 00:13:31.360 } 00:13:31.360 ] 00:13:31.360 } 00:13:31.360 ] 00:13:31.360 } 00:13:31.360 [2024-04-17 14:32:39.870803] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.360 [2024-04-17 14:32:39.940648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.878  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:31.878 00:13:31.878 14:32:40 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:13:31.878 14:32:40 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:31.878 14:32:40 -- dd/basic_rw.sh@23 -- # count=3 00:13:31.878 14:32:40 -- dd/basic_rw.sh@24 -- # count=3 00:13:31.878 14:32:40 -- dd/basic_rw.sh@25 -- # size=49152 00:13:31.878 14:32:40 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:13:31.878 14:32:40 -- dd/common.sh@98 -- # xtrace_disable 00:13:31.879 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:13:32.446 14:32:40 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:13:32.446 14:32:40 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:32.446 14:32:40 -- dd/common.sh@31 -- # xtrace_disable 00:13:32.446 14:32:40 -- common/autotest_common.sh@10 -- # set +x 00:13:32.446 { 00:13:32.446 "subsystems": [ 00:13:32.446 { 
00:13:32.446 "subsystem": "bdev", 00:13:32.446 "config": [ 00:13:32.446 { 00:13:32.446 "params": { 00:13:32.446 "trtype": "pcie", 00:13:32.446 "traddr": "0000:00:10.0", 00:13:32.446 "name": "Nvme0" 00:13:32.446 }, 00:13:32.446 "method": "bdev_nvme_attach_controller" 00:13:32.446 }, 00:13:32.446 { 00:13:32.446 "method": "bdev_wait_for_examine" 00:13:32.446 } 00:13:32.446 ] 00:13:32.446 } 00:13:32.446 ] 00:13:32.446 } 00:13:32.446 [2024-04-17 14:32:40.912826] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:32.446 [2024-04-17 14:32:40.913393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62420 ] 00:13:32.446 [2024-04-17 14:32:41.047107] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.705 [2024-04-17 14:32:41.129722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.963  Copying: 48/48 [kB] (average 46 MBps) 00:13:32.963 00:13:32.963 14:32:41 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:13:32.963 14:32:41 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:32.963 14:32:41 -- dd/common.sh@31 -- # xtrace_disable 00:13:32.963 14:32:41 -- common/autotest_common.sh@10 -- # set +x 00:13:32.963 [2024-04-17 14:32:41.470335] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:32.963 [2024-04-17 14:32:41.470437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62428 ] 00:13:32.963 { 00:13:32.963 "subsystems": [ 00:13:32.963 { 00:13:32.963 "subsystem": "bdev", 00:13:32.963 "config": [ 00:13:32.963 { 00:13:32.963 "params": { 00:13:32.963 "trtype": "pcie", 00:13:32.963 "traddr": "0000:00:10.0", 00:13:32.963 "name": "Nvme0" 00:13:32.963 }, 00:13:32.963 "method": "bdev_nvme_attach_controller" 00:13:32.963 }, 00:13:32.963 { 00:13:32.963 "method": "bdev_wait_for_examine" 00:13:32.963 } 00:13:32.963 ] 00:13:32.963 } 00:13:32.963 ] 00:13:32.963 } 00:13:33.222 [2024-04-17 14:32:41.604466] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.222 [2024-04-17 14:32:41.668164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.480  Copying: 48/48 [kB] (average 46 MBps) 00:13:33.480 00:13:33.480 14:32:41 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:33.480 14:32:41 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:13:33.480 14:32:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:33.480 14:32:41 -- dd/common.sh@11 -- # local nvme_ref= 00:13:33.480 14:32:41 -- dd/common.sh@12 -- # local size=49152 00:13:33.480 14:32:41 -- dd/common.sh@14 -- # local bs=1048576 00:13:33.480 14:32:41 -- dd/common.sh@15 -- # local count=1 00:13:33.480 14:32:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:33.480 14:32:41 -- dd/common.sh@18 -- # gen_conf 00:13:33.480 14:32:41 -- dd/common.sh@31 -- # xtrace_disable 00:13:33.480 14:32:41 -- common/autotest_common.sh@10 -- # set +x 00:13:33.480 { 
00:13:33.480 "subsystems": [ 00:13:33.480 { 00:13:33.480 "subsystem": "bdev", 00:13:33.480 "config": [ 00:13:33.480 { 00:13:33.480 "params": { 00:13:33.480 "trtype": "pcie", 00:13:33.480 "traddr": "0000:00:10.0", 00:13:33.480 "name": "Nvme0" 00:13:33.480 }, 00:13:33.480 "method": "bdev_nvme_attach_controller" 00:13:33.480 }, 00:13:33.480 { 00:13:33.480 "method": "bdev_wait_for_examine" 00:13:33.480 } 00:13:33.480 ] 00:13:33.480 } 00:13:33.480 ] 00:13:33.480 } 00:13:33.480 [2024-04-17 14:32:42.028117] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:33.480 [2024-04-17 14:32:42.028243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62449 ] 00:13:33.739 [2024-04-17 14:32:42.172529] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.739 [2024-04-17 14:32:42.242635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.998  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:33.998 00:13:33.998 14:32:42 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:33.998 14:32:42 -- dd/basic_rw.sh@23 -- # count=3 00:13:33.998 14:32:42 -- dd/basic_rw.sh@24 -- # count=3 00:13:33.998 14:32:42 -- dd/basic_rw.sh@25 -- # size=49152 00:13:33.998 14:32:42 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:13:33.998 14:32:42 -- dd/common.sh@98 -- # xtrace_disable 00:13:33.998 14:32:42 -- common/autotest_common.sh@10 -- # set +x 00:13:34.566 14:32:43 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:13:34.566 14:32:43 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:34.566 14:32:43 -- dd/common.sh@31 -- # xtrace_disable 00:13:34.566 14:32:43 -- common/autotest_common.sh@10 -- # set +x 00:13:34.566 [2024-04-17 14:32:43.131102] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:34.566 [2024-04-17 14:32:43.131227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62468 ] 00:13:34.566 { 00:13:34.566 "subsystems": [ 00:13:34.566 { 00:13:34.566 "subsystem": "bdev", 00:13:34.566 "config": [ 00:13:34.566 { 00:13:34.566 "params": { 00:13:34.566 "trtype": "pcie", 00:13:34.566 "traddr": "0000:00:10.0", 00:13:34.566 "name": "Nvme0" 00:13:34.566 }, 00:13:34.566 "method": "bdev_nvme_attach_controller" 00:13:34.566 }, 00:13:34.566 { 00:13:34.566 "method": "bdev_wait_for_examine" 00:13:34.566 } 00:13:34.566 ] 00:13:34.566 } 00:13:34.566 ] 00:13:34.566 } 00:13:34.824 [2024-04-17 14:32:43.265601] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.824 [2024-04-17 14:32:43.322116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.083  Copying: 48/48 [kB] (average 46 MBps) 00:13:35.083 00:13:35.083 14:32:43 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:35.083 14:32:43 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:13:35.083 14:32:43 -- dd/common.sh@31 -- # xtrace_disable 00:13:35.083 14:32:43 -- common/autotest_common.sh@10 -- # set +x 00:13:35.083 [2024-04-17 14:32:43.663709] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:35.083 [2024-04-17 14:32:43.663831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62476 ] 00:13:35.083 { 00:13:35.083 "subsystems": [ 00:13:35.083 { 00:13:35.083 "subsystem": "bdev", 00:13:35.083 "config": [ 00:13:35.083 { 00:13:35.083 "params": { 00:13:35.083 "trtype": "pcie", 00:13:35.083 "traddr": "0000:00:10.0", 00:13:35.083 "name": "Nvme0" 00:13:35.083 }, 00:13:35.083 "method": "bdev_nvme_attach_controller" 00:13:35.083 }, 00:13:35.083 { 00:13:35.083 "method": "bdev_wait_for_examine" 00:13:35.083 } 00:13:35.083 ] 00:13:35.083 } 00:13:35.083 ] 00:13:35.083 } 00:13:35.340 [2024-04-17 14:32:43.800106] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.340 [2024-04-17 14:32:43.857365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.598  Copying: 48/48 [kB] (average 46 MBps) 00:13:35.598 00:13:35.598 14:32:44 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:35.598 14:32:44 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:13:35.598 14:32:44 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:35.598 14:32:44 -- dd/common.sh@11 -- # local nvme_ref= 00:13:35.598 14:32:44 -- dd/common.sh@12 -- # local size=49152 00:13:35.598 14:32:44 -- dd/common.sh@14 -- # local bs=1048576 00:13:35.598 14:32:44 -- dd/common.sh@15 -- # local count=1 00:13:35.598 14:32:44 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:35.598 14:32:44 -- dd/common.sh@18 -- # gen_conf 00:13:35.598 14:32:44 -- dd/common.sh@31 -- # xtrace_disable 00:13:35.598 14:32:44 -- common/autotest_common.sh@10 -- # set +x 00:13:35.856 [2024-04-17 14:32:44.248366] Starting SPDK v24.05-pre git sha1 
0fa934e8f / DPDK 23.11.0 initialization... 00:13:35.856 [2024-04-17 14:32:44.248502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62497 ] 00:13:35.856 { 00:13:35.856 "subsystems": [ 00:13:35.856 { 00:13:35.856 "subsystem": "bdev", 00:13:35.856 "config": [ 00:13:35.856 { 00:13:35.856 "params": { 00:13:35.856 "trtype": "pcie", 00:13:35.856 "traddr": "0000:00:10.0", 00:13:35.856 "name": "Nvme0" 00:13:35.856 }, 00:13:35.856 "method": "bdev_nvme_attach_controller" 00:13:35.856 }, 00:13:35.856 { 00:13:35.856 "method": "bdev_wait_for_examine" 00:13:35.856 } 00:13:35.856 ] 00:13:35.856 } 00:13:35.856 ] 00:13:35.856 } 00:13:35.856 [2024-04-17 14:32:44.389749] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.116 [2024-04-17 14:32:44.463038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.375  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:36.376 00:13:36.376 00:13:36.376 real 0m14.130s 00:13:36.376 user 0m10.905s 00:13:36.376 sys 0m3.876s 00:13:36.376 14:32:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:36.376 14:32:44 -- common/autotest_common.sh@10 -- # set +x 00:13:36.376 ************************************ 00:13:36.376 END TEST dd_rw 00:13:36.376 ************************************ 00:13:36.376 14:32:44 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:13:36.376 14:32:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:36.376 14:32:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.376 14:32:44 -- common/autotest_common.sh@10 -- # set +x 00:13:36.376 ************************************ 00:13:36.376 START TEST dd_rw_offset 00:13:36.376 ************************************ 00:13:36.376 14:32:44 -- common/autotest_common.sh@1111 -- # basic_offset 00:13:36.376 14:32:44 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:13:36.376 14:32:44 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:13:36.376 14:32:44 -- dd/common.sh@98 -- # xtrace_disable 00:13:36.376 14:32:44 -- common/autotest_common.sh@10 -- # set +x 00:13:36.376 14:32:44 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:13:36.376 14:32:44 -- dd/basic_rw.sh@56 -- # 
data=wymsp0asfyn39qxxjjuui2yc2vukqwhkguig99gkc29geqr7murgkg3xh852fhologfp5e44d4gfrfhoyc1ot6qtkn94ze5bvts21soqiqq90umd8c1cqusx4bu10mvsn2xjihvg500p84gwyp165nwwg6j2a7l2gmhd3ny3sp8qn8d5bnhto3ty41lfu3ui1ja3gecdzaajlqzyr1opzka1aa1p6lcr98d5n8ee2l2yvam53axvt4k041b2d2gutct2k5c65accqkvovduytx1k6u2n791y0oz2w371s3t3wamvqfj3ocv3dnlsls1hr8pd42k0bhlygshno6zbv99didt2d061i7efzff1kygqt22b8o23fh5lm2m49ipb8a72ps8j65le9i8km5twtlwssm40az038gjgf27t0hmwhs29e29ryiyjiztnenzwf9gyvj73b7gkwegcp1ii0m8t0sa0rwx728m3x4cucbvblh758sfqb0ymxn6shmmhz302vkjsmbtnn55vsx1b06kxin6vuob3wfo8j97tehccib0yfhtrdaflel8u5sqdaamgbxqdvskbdm9c87cz8n06iv864fsmqzqcis1yrh1xcdgi8t3d6mg5lsbqphvmtp36e9kd4b054f6zjq3a2lbdx5a8847sd67o50avk00agdepiithqflj9wfsihoav4qahur4d9ut0l938v5bwb9jv273m4vwgjvc6g8jfuf8nkn96b3q3m5lpmbj7907ruohtnmubci6gopwcicrwvt2iby7iza4jj9z9b7kkdztztfrf7o67mk4ijvl126ce4xwqd816r6pyso0cimefpep3ta20nwy4ij6fkwvcegmojbqyox61fmsrjz7p0m27qd9yzuo2oenuglgu3ewxz4oofkzzijps8j4zjufgrq3t8vm85j5xk2n0ltau4nccijori9ui92vzki9eqlliwgy6qwm20qq2ap3bhra7eikni0z214sk4i3ds5bo6lqpdzcwz4mds7k6w2aqwlrakmzp8dor6o13z7pknhnpztvzkicafode7zshwk8il7pefom5rc6lx7dqreh7ovo0pipf1829enojvg3i40i6afr6fi4p6h6z86w5gl427uj23gvg0z53j0yij39svp0nvl4lnj0mrmwwdtsrzrh5o39gkutyab3fu581t7v4iunxz5zdjk8ld3u22kswjchz84m4tbnkxtdbsspwkpp581zbonijzujqf7fm1zejfbc3e4dnzgaa0xyykzhkqqypo4c3ii9xkrm7yy8f9zzbcbosfhwobqe9s0020zkt1ugqt6m87bz5j8x4ocojm271bfu5c4huot5sy0v368bkyex93zdbld8xmtm5gy86k1krgnsyute3x5mq77eprw4zwz3h0pgjz98xg1f3a5t2aunwt84uopthz9evusynmw2kfi2vve18j6hpbcblxer9eqgyb2kocc6zgg6dwdru6iddbgvs712gpbkrggdbq0sm6uf6vtaoci3x3tp9uj3hwsmy70weivinxzmud5aylmo4w3j7prelqfemvho8ab1hdksi4ee6g2w914boq7a9sp5bm2evfjnj2m7betbj44jwx4vlege7leirpxtdismwss704oexgbsj9oagodj8gz7x7gqcmtbwmknrcfr082o9h4o8g6m34g3c9e4677je3onnv7ztbp2jzzh44toixtor1b1trzni7o46bh498t6g0wr04a0xmcqhpld8j543jmawlf86i6necm6md9swqqvjpjnkka6idx9narynsfgghp2ldjjtqsxj4j361ieoslqj3idhuddlstp0hasjygsvxgtnermtnsdhkeq94gnfvv9c6zx78k2ztm2i2vv2nmzbcl7epj8s6jvhw8mmnan6xkvjv8curklcrf38ssfpudouxbmo26x9qul6ivp9gfd1h64u67buqpys8lm8740ysfbespdnvc9tc5t184mn51ubrhr4104881uzjb85r779w77rscf7x64j35ng9mmtkw2j8bjir6agpqj87l9mfcjjtuvw4rt6fiinb4d2l8r8wlefmoiw7p097umm1vcdnv9osa9h4fqtam5yaysg6q9lbbs2du5v8wjyh8xyseawmqx6mwhgouw4gesvh7a6o7q8pr01os2tksq7vgf1u2hb2571cznqem27qzswlisg4src5l65pxthbutak66lusztpqfbgqfvnskx38uywjasyqrf3pujwr8begwe3aeedcn050vr1524noxulrxgw4mhbr09ab7fqwcrkfcdwckpauftr698f3p5dfzgh1s1ld19rsl79f1iofnj41aq9thd4zd66ykt18m41gee2n277jyac1ucjz0qdz24bkvx03l1sel5z6ovetmonmp7gnhy6tdszbzykarylzlm504c7y07d5i7dzjn35uq4mvwyrusy9ksd4kd30hyeb34ub3btx4tslte2v8rjgos4edyuie78t4gfxyiud521sew1lvp4gpm1egegpbdqpnrys4yiejdl6xitxko0nvk22cslvl3nlq6d9ggoaxtufj41tsycimoovs2l1cty6h20k75an3dj939xbrjukxdbwzl0e4ornnnj9tsnancv5phlu3gopxemszp1ilel5gja8t0ozkijfm76qn1k2r3qr6fe2d5xu5641h4yt7gvetxz4p2utq24fp8ywfcxuchjazfdsb9qgg0hqb0k5l4b5fn29t23vt21w79pl14wzivoju4y3ukb2ow07qk49p348i1x5x8r71n0fykbehdsm36behjyykzug15bhw8466ddnce267t2t937nz3cx95wvu5v2bqzyfrg4cmhbca9j6z1l24qf1zl3x7jrg8xdq8a46gcw9vytzkdgyre787l3mjro85h9hx01vo1mjqqh95omnn4hmm8021kcdxyhustedcq7fqin0qogrgkw7d60912znqmzvo22r5i04decc7nmbh45w6sm3i7dxyxnbuf0i3ygfip5sldxtb7x9fnyk7xvuemp6mzknfbuzwev3kb8fmeiob5usfuwbrs5ppmrfonkg286hyqnogl8e0flsepjwpwlqc6g35ep34v8lc3bokg9tors2lorqef7un81a0amj77skb5zopujnl45ob2w31kelolnak2pw6ttvn62lemb8skp2jfxfkz8alk863vn0o9mp6o2hz29x13v6bhvmlxtrc1veetqn3u5jiuosgsl6fmnbx6plmq8soj2ea6rvgsvgkzh31fmjklxwjm0n9gmbm96rl7zf9vzpt045tewqv4ulekisqhkq2w5f1i61ln0cmoa7hnnko8ppkks0gf6iqcv4h36v2f4ujll2csdzzysnn07m7nzrko1b09asi9nfkb4tetia7pgxvwwmud2zgmrsscw19rboajpvs813gf4kt0uc2fs79fhpne4xbvd0vagth32n6z0kgd
n372vo5bmp1ym7qyoiva41iplwaja7ew13nc656qt9sdaovpk8ppjg4djjah6w5yjclaot183qsc87n1noprrl4swijhjf4cz9bbv6i5lw94v7az1jkgmhs3f4w6le9ebxx2x3lshwep734xj8zkkrlxywrfvwdf5cv3jk70sxqwukah48mrtt1dhgsccxf65mzucoh40yjd637q469gzy7dwc1ufi8lwcfxey2pmuzbtswwk4s95fvmgwlhi40szxwansxmnppicptreajyfzxjrzi9zki5cj8uiq54epnt6g2c5euiymbf8tibk0b7je9r22nhef0o3nv35awhr6cv45oa2amos5pv9uf90sjjjihfs3zl1jy1xeugxvwk9cada9bxzdzjqd9j2kdh9kbmbb2980gbsjk7uan977sl8f0nplol8v765gv6xdh9ei4ozwt3wh6u7yek436js75jpdp1hz9uz89rsptcq9aeb3pexgnmxltt2k8x9f2hyztdyi5uit6r5oykgqj8bfzth1lzqpo2id 00:13:36.376 14:32:44 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:13:36.376 14:32:44 -- dd/basic_rw.sh@59 -- # gen_conf 00:13:36.376 14:32:44 -- dd/common.sh@31 -- # xtrace_disable 00:13:36.376 14:32:44 -- common/autotest_common.sh@10 -- # set +x 00:13:36.635 [2024-04-17 14:32:45.001422] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:36.635 [2024-04-17 14:32:45.001523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62533 ] 00:13:36.635 { 00:13:36.635 "subsystems": [ 00:13:36.635 { 00:13:36.635 "subsystem": "bdev", 00:13:36.635 "config": [ 00:13:36.635 { 00:13:36.635 "params": { 00:13:36.635 "trtype": "pcie", 00:13:36.635 "traddr": "0000:00:10.0", 00:13:36.635 "name": "Nvme0" 00:13:36.635 }, 00:13:36.635 "method": "bdev_nvme_attach_controller" 00:13:36.635 }, 00:13:36.635 { 00:13:36.635 "method": "bdev_wait_for_examine" 00:13:36.635 } 00:13:36.635 ] 00:13:36.635 } 00:13:36.635 ] 00:13:36.635 } 00:13:36.635 [2024-04-17 14:32:45.138662] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.635 [2024-04-17 14:32:45.207047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.152  Copying: 4096/4096 [B] (average 4000 kBps) 00:13:37.152 00:13:37.152 14:32:45 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:13:37.152 14:32:45 -- dd/basic_rw.sh@65 -- # gen_conf 00:13:37.152 14:32:45 -- dd/common.sh@31 -- # xtrace_disable 00:13:37.152 14:32:45 -- common/autotest_common.sh@10 -- # set +x 00:13:37.152 [2024-04-17 14:32:45.572875] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:37.152 [2024-04-17 14:32:45.572995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62546 ] 00:13:37.152 { 00:13:37.152 "subsystems": [ 00:13:37.152 { 00:13:37.152 "subsystem": "bdev", 00:13:37.152 "config": [ 00:13:37.152 { 00:13:37.152 "params": { 00:13:37.152 "trtype": "pcie", 00:13:37.152 "traddr": "0000:00:10.0", 00:13:37.152 "name": "Nvme0" 00:13:37.152 }, 00:13:37.152 "method": "bdev_nvme_attach_controller" 00:13:37.152 }, 00:13:37.152 { 00:13:37.152 "method": "bdev_wait_for_examine" 00:13:37.152 } 00:13:37.152 ] 00:13:37.152 } 00:13:37.152 ] 00:13:37.152 } 00:13:37.152 [2024-04-17 14:32:45.705650] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.411 [2024-04-17 14:32:45.763901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.670  Copying: 4096/4096 [B] (average 4000 kBps) 00:13:37.670 00:13:37.670 14:32:46 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:13:37.671 14:32:46 -- dd/basic_rw.sh@72 -- # [[ wymsp0asfyn39qxxjjuui2yc2vukqwhkguig99gkc29geqr7murgkg3xh852fhologfp5e44d4gfrfhoyc1ot6qtkn94ze5bvts21soqiqq90umd8c1cqusx4bu10mvsn2xjihvg500p84gwyp165nwwg6j2a7l2gmhd3ny3sp8qn8d5bnhto3ty41lfu3ui1ja3gecdzaajlqzyr1opzka1aa1p6lcr98d5n8ee2l2yvam53axvt4k041b2d2gutct2k5c65accqkvovduytx1k6u2n791y0oz2w371s3t3wamvqfj3ocv3dnlsls1hr8pd42k0bhlygshno6zbv99didt2d061i7efzff1kygqt22b8o23fh5lm2m49ipb8a72ps8j65le9i8km5twtlwssm40az038gjgf27t0hmwhs29e29ryiyjiztnenzwf9gyvj73b7gkwegcp1ii0m8t0sa0rwx728m3x4cucbvblh758sfqb0ymxn6shmmhz302vkjsmbtnn55vsx1b06kxin6vuob3wfo8j97tehccib0yfhtrdaflel8u5sqdaamgbxqdvskbdm9c87cz8n06iv864fsmqzqcis1yrh1xcdgi8t3d6mg5lsbqphvmtp36e9kd4b054f6zjq3a2lbdx5a8847sd67o50avk00agdepiithqflj9wfsihoav4qahur4d9ut0l938v5bwb9jv273m4vwgjvc6g8jfuf8nkn96b3q3m5lpmbj7907ruohtnmubci6gopwcicrwvt2iby7iza4jj9z9b7kkdztztfrf7o67mk4ijvl126ce4xwqd816r6pyso0cimefpep3ta20nwy4ij6fkwvcegmojbqyox61fmsrjz7p0m27qd9yzuo2oenuglgu3ewxz4oofkzzijps8j4zjufgrq3t8vm85j5xk2n0ltau4nccijori9ui92vzki9eqlliwgy6qwm20qq2ap3bhra7eikni0z214sk4i3ds5bo6lqpdzcwz4mds7k6w2aqwlrakmzp8dor6o13z7pknhnpztvzkicafode7zshwk8il7pefom5rc6lx7dqreh7ovo0pipf1829enojvg3i40i6afr6fi4p6h6z86w5gl427uj23gvg0z53j0yij39svp0nvl4lnj0mrmwwdtsrzrh5o39gkutyab3fu581t7v4iunxz5zdjk8ld3u22kswjchz84m4tbnkxtdbsspwkpp581zbonijzujqf7fm1zejfbc3e4dnzgaa0xyykzhkqqypo4c3ii9xkrm7yy8f9zzbcbosfhwobqe9s0020zkt1ugqt6m87bz5j8x4ocojm271bfu5c4huot5sy0v368bkyex93zdbld8xmtm5gy86k1krgnsyute3x5mq77eprw4zwz3h0pgjz98xg1f3a5t2aunwt84uopthz9evusynmw2kfi2vve18j6hpbcblxer9eqgyb2kocc6zgg6dwdru6iddbgvs712gpbkrggdbq0sm6uf6vtaoci3x3tp9uj3hwsmy70weivinxzmud5aylmo4w3j7prelqfemvho8ab1hdksi4ee6g2w914boq7a9sp5bm2evfjnj2m7betbj44jwx4vlege7leirpxtdismwss704oexgbsj9oagodj8gz7x7gqcmtbwmknrcfr082o9h4o8g6m34g3c9e4677je3onnv7ztbp2jzzh44toixtor1b1trzni7o46bh498t6g0wr04a0xmcqhpld8j543jmawlf86i6necm6md9swqqvjpjnkka6idx9narynsfgghp2ldjjtqsxj4j361ieoslqj3idhuddlstp0hasjygsvxgtnermtnsdhkeq94gnfvv9c6zx78k2ztm2i2vv2nmzbcl7epj8s6jvhw8mmnan6xkvjv8curklcrf38ssfpudouxbmo26x9qul6ivp9gfd1h64u67buqpys8lm8740ysfbespdnvc9tc5t184mn51ubrhr4104881uzjb85r779w77rscf7x64j35ng9mmtkw2j8bjir6agpqj87l9mfcjjtuvw4rt6fiinb4d2l8r8wlefmoiw7p097umm1vcdnv9osa9h4fqtam5yaysg6q9lbbs2du5v8wjyh8xyseawmqx6mwhgouw4gesvh7a6o7q8pr01os2tksq7vgf1u2hb2571cznqem27qzswlisg4src5l65pxthbutak66lusztpqfbgqfvnskx38uywjasyqrf3pujwr8begwe3aeedcn050vr1524noxulrxgw4mhbr09ab7fqwcrkfcdwckpauftr698f3p5dfzgh1s1ld19rsl79f1io
fnj41aq9thd4zd66ykt18m41gee2n277jyac1ucjz0qdz24bkvx03l1sel5z6ovetmonmp7gnhy6tdszbzykarylzlm504c7y07d5i7dzjn35uq4mvwyrusy9ksd4kd30hyeb34ub3btx4tslte2v8rjgos4edyuie78t4gfxyiud521sew1lvp4gpm1egegpbdqpnrys4yiejdl6xitxko0nvk22cslvl3nlq6d9ggoaxtufj41tsycimoovs2l1cty6h20k75an3dj939xbrjukxdbwzl0e4ornnnj9tsnancv5phlu3gopxemszp1ilel5gja8t0ozkijfm76qn1k2r3qr6fe2d5xu5641h4yt7gvetxz4p2utq24fp8ywfcxuchjazfdsb9qgg0hqb0k5l4b5fn29t23vt21w79pl14wzivoju4y3ukb2ow07qk49p348i1x5x8r71n0fykbehdsm36behjyykzug15bhw8466ddnce267t2t937nz3cx95wvu5v2bqzyfrg4cmhbca9j6z1l24qf1zl3x7jrg8xdq8a46gcw9vytzkdgyre787l3mjro85h9hx01vo1mjqqh95omnn4hmm8021kcdxyhustedcq7fqin0qogrgkw7d60912znqmzvo22r5i04decc7nmbh45w6sm3i7dxyxnbuf0i3ygfip5sldxtb7x9fnyk7xvuemp6mzknfbuzwev3kb8fmeiob5usfuwbrs5ppmrfonkg286hyqnogl8e0flsepjwpwlqc6g35ep34v8lc3bokg9tors2lorqef7un81a0amj77skb5zopujnl45ob2w31kelolnak2pw6ttvn62lemb8skp2jfxfkz8alk863vn0o9mp6o2hz29x13v6bhvmlxtrc1veetqn3u5jiuosgsl6fmnbx6plmq8soj2ea6rvgsvgkzh31fmjklxwjm0n9gmbm96rl7zf9vzpt045tewqv4ulekisqhkq2w5f1i61ln0cmoa7hnnko8ppkks0gf6iqcv4h36v2f4ujll2csdzzysnn07m7nzrko1b09asi9nfkb4tetia7pgxvwwmud2zgmrsscw19rboajpvs813gf4kt0uc2fs79fhpne4xbvd0vagth32n6z0kgdn372vo5bmp1ym7qyoiva41iplwaja7ew13nc656qt9sdaovpk8ppjg4djjah6w5yjclaot183qsc87n1noprrl4swijhjf4cz9bbv6i5lw94v7az1jkgmhs3f4w6le9ebxx2x3lshwep734xj8zkkrlxywrfvwdf5cv3jk70sxqwukah48mrtt1dhgsccxf65mzucoh40yjd637q469gzy7dwc1ufi8lwcfxey2pmuzbtswwk4s95fvmgwlhi40szxwansxmnppicptreajyfzxjrzi9zki5cj8uiq54epnt6g2c5euiymbf8tibk0b7je9r22nhef0o3nv35awhr6cv45oa2amos5pv9uf90sjjjihfs3zl1jy1xeugxvwk9cada9bxzdzjqd9j2kdh9kbmbb2980gbsjk7uan977sl8f0nplol8v765gv6xdh9ei4ozwt3wh6u7yek436js75jpdp1hz9uz89rsptcq9aeb3pexgnmxltt2k8x9f2hyztdyi5uit6r5oykgqj8bfzth1lzqpo2id == \w\y\m\s\p\0\a\s\f\y\n\3\9\q\x\x\j\j\u\u\i\2\y\c\2\v\u\k\q\w\h\k\g\u\i\g\9\9\g\k\c\2\9\g\e\q\r\7\m\u\r\g\k\g\3\x\h\8\5\2\f\h\o\l\o\g\f\p\5\e\4\4\d\4\g\f\r\f\h\o\y\c\1\o\t\6\q\t\k\n\9\4\z\e\5\b\v\t\s\2\1\s\o\q\i\q\q\9\0\u\m\d\8\c\1\c\q\u\s\x\4\b\u\1\0\m\v\s\n\2\x\j\i\h\v\g\5\0\0\p\8\4\g\w\y\p\1\6\5\n\w\w\g\6\j\2\a\7\l\2\g\m\h\d\3\n\y\3\s\p\8\q\n\8\d\5\b\n\h\t\o\3\t\y\4\1\l\f\u\3\u\i\1\j\a\3\g\e\c\d\z\a\a\j\l\q\z\y\r\1\o\p\z\k\a\1\a\a\1\p\6\l\c\r\9\8\d\5\n\8\e\e\2\l\2\y\v\a\m\5\3\a\x\v\t\4\k\0\4\1\b\2\d\2\g\u\t\c\t\2\k\5\c\6\5\a\c\c\q\k\v\o\v\d\u\y\t\x\1\k\6\u\2\n\7\9\1\y\0\o\z\2\w\3\7\1\s\3\t\3\w\a\m\v\q\f\j\3\o\c\v\3\d\n\l\s\l\s\1\h\r\8\p\d\4\2\k\0\b\h\l\y\g\s\h\n\o\6\z\b\v\9\9\d\i\d\t\2\d\0\6\1\i\7\e\f\z\f\f\1\k\y\g\q\t\2\2\b\8\o\2\3\f\h\5\l\m\2\m\4\9\i\p\b\8\a\7\2\p\s\8\j\6\5\l\e\9\i\8\k\m\5\t\w\t\l\w\s\s\m\4\0\a\z\0\3\8\g\j\g\f\2\7\t\0\h\m\w\h\s\2\9\e\2\9\r\y\i\y\j\i\z\t\n\e\n\z\w\f\9\g\y\v\j\7\3\b\7\g\k\w\e\g\c\p\1\i\i\0\m\8\t\0\s\a\0\r\w\x\7\2\8\m\3\x\4\c\u\c\b\v\b\l\h\7\5\8\s\f\q\b\0\y\m\x\n\6\s\h\m\m\h\z\3\0\2\v\k\j\s\m\b\t\n\n\5\5\v\s\x\1\b\0\6\k\x\i\n\6\v\u\o\b\3\w\f\o\8\j\9\7\t\e\h\c\c\i\b\0\y\f\h\t\r\d\a\f\l\e\l\8\u\5\s\q\d\a\a\m\g\b\x\q\d\v\s\k\b\d\m\9\c\8\7\c\z\8\n\0\6\i\v\8\6\4\f\s\m\q\z\q\c\i\s\1\y\r\h\1\x\c\d\g\i\8\t\3\d\6\m\g\5\l\s\b\q\p\h\v\m\t\p\3\6\e\9\k\d\4\b\0\5\4\f\6\z\j\q\3\a\2\l\b\d\x\5\a\8\8\4\7\s\d\6\7\o\5\0\a\v\k\0\0\a\g\d\e\p\i\i\t\h\q\f\l\j\9\w\f\s\i\h\o\a\v\4\q\a\h\u\r\4\d\9\u\t\0\l\9\3\8\v\5\b\w\b\9\j\v\2\7\3\m\4\v\w\g\j\v\c\6\g\8\j\f\u\f\8\n\k\n\9\6\b\3\q\3\m\5\l\p\m\b\j\7\9\0\7\r\u\o\h\t\n\m\u\b\c\i\6\g\o\p\w\c\i\c\r\w\v\t\2\i\b\y\7\i\z\a\4\j\j\9\z\9\b\7\k\k\d\z\t\z\t\f\r\f\7\o\6\7\m\k\4\i\j\v\l\1\2\6\c\e\4\x\w\q\d\8\1\6\r\6\p\y\s\o\0\c\i\m\e\f\p\e\p\3\t\a\2\0\n\w\y\4\i\j\6\f\k\w\v\c\e\g\m\o\j\b\q\y\o\x\6\1\f\m\s\r\j\z\7\p\0\m\2\7\q\d\9\y\z\u\o\2\o\e\n\u\g\l\g\u\3\e\w\x\z\4\o\
o\f\k\z\z\i\j\p\s\8\j\4\z\j\u\f\g\r\q\3\t\8\v\m\8\5\j\5\x\k\2\n\0\l\t\a\u\4\n\c\c\i\j\o\r\i\9\u\i\9\2\v\z\k\i\9\e\q\l\l\i\w\g\y\6\q\w\m\2\0\q\q\2\a\p\3\b\h\r\a\7\e\i\k\n\i\0\z\2\1\4\s\k\4\i\3\d\s\5\b\o\6\l\q\p\d\z\c\w\z\4\m\d\s\7\k\6\w\2\a\q\w\l\r\a\k\m\z\p\8\d\o\r\6\o\1\3\z\7\p\k\n\h\n\p\z\t\v\z\k\i\c\a\f\o\d\e\7\z\s\h\w\k\8\i\l\7\p\e\f\o\m\5\r\c\6\l\x\7\d\q\r\e\h\7\o\v\o\0\p\i\p\f\1\8\2\9\e\n\o\j\v\g\3\i\4\0\i\6\a\f\r\6\f\i\4\p\6\h\6\z\8\6\w\5\g\l\4\2\7\u\j\2\3\g\v\g\0\z\5\3\j\0\y\i\j\3\9\s\v\p\0\n\v\l\4\l\n\j\0\m\r\m\w\w\d\t\s\r\z\r\h\5\o\3\9\g\k\u\t\y\a\b\3\f\u\5\8\1\t\7\v\4\i\u\n\x\z\5\z\d\j\k\8\l\d\3\u\2\2\k\s\w\j\c\h\z\8\4\m\4\t\b\n\k\x\t\d\b\s\s\p\w\k\p\p\5\8\1\z\b\o\n\i\j\z\u\j\q\f\7\f\m\1\z\e\j\f\b\c\3\e\4\d\n\z\g\a\a\0\x\y\y\k\z\h\k\q\q\y\p\o\4\c\3\i\i\9\x\k\r\m\7\y\y\8\f\9\z\z\b\c\b\o\s\f\h\w\o\b\q\e\9\s\0\0\2\0\z\k\t\1\u\g\q\t\6\m\8\7\b\z\5\j\8\x\4\o\c\o\j\m\2\7\1\b\f\u\5\c\4\h\u\o\t\5\s\y\0\v\3\6\8\b\k\y\e\x\9\3\z\d\b\l\d\8\x\m\t\m\5\g\y\8\6\k\1\k\r\g\n\s\y\u\t\e\3\x\5\m\q\7\7\e\p\r\w\4\z\w\z\3\h\0\p\g\j\z\9\8\x\g\1\f\3\a\5\t\2\a\u\n\w\t\8\4\u\o\p\t\h\z\9\e\v\u\s\y\n\m\w\2\k\f\i\2\v\v\e\1\8\j\6\h\p\b\c\b\l\x\e\r\9\e\q\g\y\b\2\k\o\c\c\6\z\g\g\6\d\w\d\r\u\6\i\d\d\b\g\v\s\7\1\2\g\p\b\k\r\g\g\d\b\q\0\s\m\6\u\f\6\v\t\a\o\c\i\3\x\3\t\p\9\u\j\3\h\w\s\m\y\7\0\w\e\i\v\i\n\x\z\m\u\d\5\a\y\l\m\o\4\w\3\j\7\p\r\e\l\q\f\e\m\v\h\o\8\a\b\1\h\d\k\s\i\4\e\e\6\g\2\w\9\1\4\b\o\q\7\a\9\s\p\5\b\m\2\e\v\f\j\n\j\2\m\7\b\e\t\b\j\4\4\j\w\x\4\v\l\e\g\e\7\l\e\i\r\p\x\t\d\i\s\m\w\s\s\7\0\4\o\e\x\g\b\s\j\9\o\a\g\o\d\j\8\g\z\7\x\7\g\q\c\m\t\b\w\m\k\n\r\c\f\r\0\8\2\o\9\h\4\o\8\g\6\m\3\4\g\3\c\9\e\4\6\7\7\j\e\3\o\n\n\v\7\z\t\b\p\2\j\z\z\h\4\4\t\o\i\x\t\o\r\1\b\1\t\r\z\n\i\7\o\4\6\b\h\4\9\8\t\6\g\0\w\r\0\4\a\0\x\m\c\q\h\p\l\d\8\j\5\4\3\j\m\a\w\l\f\8\6\i\6\n\e\c\m\6\m\d\9\s\w\q\q\v\j\p\j\n\k\k\a\6\i\d\x\9\n\a\r\y\n\s\f\g\g\h\p\2\l\d\j\j\t\q\s\x\j\4\j\3\6\1\i\e\o\s\l\q\j\3\i\d\h\u\d\d\l\s\t\p\0\h\a\s\j\y\g\s\v\x\g\t\n\e\r\m\t\n\s\d\h\k\e\q\9\4\g\n\f\v\v\9\c\6\z\x\7\8\k\2\z\t\m\2\i\2\v\v\2\n\m\z\b\c\l\7\e\p\j\8\s\6\j\v\h\w\8\m\m\n\a\n\6\x\k\v\j\v\8\c\u\r\k\l\c\r\f\3\8\s\s\f\p\u\d\o\u\x\b\m\o\2\6\x\9\q\u\l\6\i\v\p\9\g\f\d\1\h\6\4\u\6\7\b\u\q\p\y\s\8\l\m\8\7\4\0\y\s\f\b\e\s\p\d\n\v\c\9\t\c\5\t\1\8\4\m\n\5\1\u\b\r\h\r\4\1\0\4\8\8\1\u\z\j\b\8\5\r\7\7\9\w\7\7\r\s\c\f\7\x\6\4\j\3\5\n\g\9\m\m\t\k\w\2\j\8\b\j\i\r\6\a\g\p\q\j\8\7\l\9\m\f\c\j\j\t\u\v\w\4\r\t\6\f\i\i\n\b\4\d\2\l\8\r\8\w\l\e\f\m\o\i\w\7\p\0\9\7\u\m\m\1\v\c\d\n\v\9\o\s\a\9\h\4\f\q\t\a\m\5\y\a\y\s\g\6\q\9\l\b\b\s\2\d\u\5\v\8\w\j\y\h\8\x\y\s\e\a\w\m\q\x\6\m\w\h\g\o\u\w\4\g\e\s\v\h\7\a\6\o\7\q\8\p\r\0\1\o\s\2\t\k\s\q\7\v\g\f\1\u\2\h\b\2\5\7\1\c\z\n\q\e\m\2\7\q\z\s\w\l\i\s\g\4\s\r\c\5\l\6\5\p\x\t\h\b\u\t\a\k\6\6\l\u\s\z\t\p\q\f\b\g\q\f\v\n\s\k\x\3\8\u\y\w\j\a\s\y\q\r\f\3\p\u\j\w\r\8\b\e\g\w\e\3\a\e\e\d\c\n\0\5\0\v\r\1\5\2\4\n\o\x\u\l\r\x\g\w\4\m\h\b\r\0\9\a\b\7\f\q\w\c\r\k\f\c\d\w\c\k\p\a\u\f\t\r\6\9\8\f\3\p\5\d\f\z\g\h\1\s\1\l\d\1\9\r\s\l\7\9\f\1\i\o\f\n\j\4\1\a\q\9\t\h\d\4\z\d\6\6\y\k\t\1\8\m\4\1\g\e\e\2\n\2\7\7\j\y\a\c\1\u\c\j\z\0\q\d\z\2\4\b\k\v\x\0\3\l\1\s\e\l\5\z\6\o\v\e\t\m\o\n\m\p\7\g\n\h\y\6\t\d\s\z\b\z\y\k\a\r\y\l\z\l\m\5\0\4\c\7\y\0\7\d\5\i\7\d\z\j\n\3\5\u\q\4\m\v\w\y\r\u\s\y\9\k\s\d\4\k\d\3\0\h\y\e\b\3\4\u\b\3\b\t\x\4\t\s\l\t\e\2\v\8\r\j\g\o\s\4\e\d\y\u\i\e\7\8\t\4\g\f\x\y\i\u\d\5\2\1\s\e\w\1\l\v\p\4\g\p\m\1\e\g\e\g\p\b\d\q\p\n\r\y\s\4\y\i\e\j\d\l\6\x\i\t\x\k\o\0\n\v\k\2\2\c\s\l\v\l\3\n\l\q\6\d\9\g\g\o\a\x\t\u\f\j\4\1\t\s\y\c\i\m\o\o\v\s\2\l\1\c\t\y\6\h\2\0\k\7\5\a\n\3\d\j\9\3\9\x\b\r\j\u\k\x\d\b\w\z\l\0\e\4\o\r\n\n\n\j\9\t\s\n\a\n\c\v\5\p\h\l\u\3\g\o
\p\x\e\m\s\z\p\1\i\l\e\l\5\g\j\a\8\t\0\o\z\k\i\j\f\m\7\6\q\n\1\k\2\r\3\q\r\6\f\e\2\d\5\x\u\5\6\4\1\h\4\y\t\7\g\v\e\t\x\z\4\p\2\u\t\q\2\4\f\p\8\y\w\f\c\x\u\c\h\j\a\z\f\d\s\b\9\q\g\g\0\h\q\b\0\k\5\l\4\b\5\f\n\2\9\t\2\3\v\t\2\1\w\7\9\p\l\1\4\w\z\i\v\o\j\u\4\y\3\u\k\b\2\o\w\0\7\q\k\4\9\p\3\4\8\i\1\x\5\x\8\r\7\1\n\0\f\y\k\b\e\h\d\s\m\3\6\b\e\h\j\y\y\k\z\u\g\1\5\b\h\w\8\4\6\6\d\d\n\c\e\2\6\7\t\2\t\9\3\7\n\z\3\c\x\9\5\w\v\u\5\v\2\b\q\z\y\f\r\g\4\c\m\h\b\c\a\9\j\6\z\1\l\2\4\q\f\1\z\l\3\x\7\j\r\g\8\x\d\q\8\a\4\6\g\c\w\9\v\y\t\z\k\d\g\y\r\e\7\8\7\l\3\m\j\r\o\8\5\h\9\h\x\0\1\v\o\1\m\j\q\q\h\9\5\o\m\n\n\4\h\m\m\8\0\2\1\k\c\d\x\y\h\u\s\t\e\d\c\q\7\f\q\i\n\0\q\o\g\r\g\k\w\7\d\6\0\9\1\2\z\n\q\m\z\v\o\2\2\r\5\i\0\4\d\e\c\c\7\n\m\b\h\4\5\w\6\s\m\3\i\7\d\x\y\x\n\b\u\f\0\i\3\y\g\f\i\p\5\s\l\d\x\t\b\7\x\9\f\n\y\k\7\x\v\u\e\m\p\6\m\z\k\n\f\b\u\z\w\e\v\3\k\b\8\f\m\e\i\o\b\5\u\s\f\u\w\b\r\s\5\p\p\m\r\f\o\n\k\g\2\8\6\h\y\q\n\o\g\l\8\e\0\f\l\s\e\p\j\w\p\w\l\q\c\6\g\3\5\e\p\3\4\v\8\l\c\3\b\o\k\g\9\t\o\r\s\2\l\o\r\q\e\f\7\u\n\8\1\a\0\a\m\j\7\7\s\k\b\5\z\o\p\u\j\n\l\4\5\o\b\2\w\3\1\k\e\l\o\l\n\a\k\2\p\w\6\t\t\v\n\6\2\l\e\m\b\8\s\k\p\2\j\f\x\f\k\z\8\a\l\k\8\6\3\v\n\0\o\9\m\p\6\o\2\h\z\2\9\x\1\3\v\6\b\h\v\m\l\x\t\r\c\1\v\e\e\t\q\n\3\u\5\j\i\u\o\s\g\s\l\6\f\m\n\b\x\6\p\l\m\q\8\s\o\j\2\e\a\6\r\v\g\s\v\g\k\z\h\3\1\f\m\j\k\l\x\w\j\m\0\n\9\g\m\b\m\9\6\r\l\7\z\f\9\v\z\p\t\0\4\5\t\e\w\q\v\4\u\l\e\k\i\s\q\h\k\q\2\w\5\f\1\i\6\1\l\n\0\c\m\o\a\7\h\n\n\k\o\8\p\p\k\k\s\0\g\f\6\i\q\c\v\4\h\3\6\v\2\f\4\u\j\l\l\2\c\s\d\z\z\y\s\n\n\0\7\m\7\n\z\r\k\o\1\b\0\9\a\s\i\9\n\f\k\b\4\t\e\t\i\a\7\p\g\x\v\w\w\m\u\d\2\z\g\m\r\s\s\c\w\1\9\r\b\o\a\j\p\v\s\8\1\3\g\f\4\k\t\0\u\c\2\f\s\7\9\f\h\p\n\e\4\x\b\v\d\0\v\a\g\t\h\3\2\n\6\z\0\k\g\d\n\3\7\2\v\o\5\b\m\p\1\y\m\7\q\y\o\i\v\a\4\1\i\p\l\w\a\j\a\7\e\w\1\3\n\c\6\5\6\q\t\9\s\d\a\o\v\p\k\8\p\p\j\g\4\d\j\j\a\h\6\w\5\y\j\c\l\a\o\t\1\8\3\q\s\c\8\7\n\1\n\o\p\r\r\l\4\s\w\i\j\h\j\f\4\c\z\9\b\b\v\6\i\5\l\w\9\4\v\7\a\z\1\j\k\g\m\h\s\3\f\4\w\6\l\e\9\e\b\x\x\2\x\3\l\s\h\w\e\p\7\3\4\x\j\8\z\k\k\r\l\x\y\w\r\f\v\w\d\f\5\c\v\3\j\k\7\0\s\x\q\w\u\k\a\h\4\8\m\r\t\t\1\d\h\g\s\c\c\x\f\6\5\m\z\u\c\o\h\4\0\y\j\d\6\3\7\q\4\6\9\g\z\y\7\d\w\c\1\u\f\i\8\l\w\c\f\x\e\y\2\p\m\u\z\b\t\s\w\w\k\4\s\9\5\f\v\m\g\w\l\h\i\4\0\s\z\x\w\a\n\s\x\m\n\p\p\i\c\p\t\r\e\a\j\y\f\z\x\j\r\z\i\9\z\k\i\5\c\j\8\u\i\q\5\4\e\p\n\t\6\g\2\c\5\e\u\i\y\m\b\f\8\t\i\b\k\0\b\7\j\e\9\r\2\2\n\h\e\f\0\o\3\n\v\3\5\a\w\h\r\6\c\v\4\5\o\a\2\a\m\o\s\5\p\v\9\u\f\9\0\s\j\j\j\i\h\f\s\3\z\l\1\j\y\1\x\e\u\g\x\v\w\k\9\c\a\d\a\9\b\x\z\d\z\j\q\d\9\j\2\k\d\h\9\k\b\m\b\b\2\9\8\0\g\b\s\j\k\7\u\a\n\9\7\7\s\l\8\f\0\n\p\l\o\l\8\v\7\6\5\g\v\6\x\d\h\9\e\i\4\o\z\w\t\3\w\h\6\u\7\y\e\k\4\3\6\j\s\7\5\j\p\d\p\1\h\z\9\u\z\8\9\r\s\p\t\c\q\9\a\e\b\3\p\e\x\g\n\m\x\l\t\t\2\k\8\x\9\f\2\h\y\z\t\d\y\i\5\u\i\t\6\r\5\o\y\k\g\q\j\8\b\f\z\t\h\1\l\z\q\p\o\2\i\d ]] 00:13:37.671 00:13:37.671 real 0m1.165s 00:13:37.671 user 0m0.863s 00:13:37.671 sys 0m0.400s 00:13:37.671 14:32:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:37.671 14:32:46 -- common/autotest_common.sh@10 -- # set +x 00:13:37.671 ************************************ 00:13:37.671 END TEST dd_rw_offset 00:13:37.671 ************************************ 00:13:37.671 14:32:46 -- dd/basic_rw.sh@1 -- # cleanup 00:13:37.671 14:32:46 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:13:37.671 14:32:46 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:37.671 14:32:46 -- dd/common.sh@11 -- # local nvme_ref= 00:13:37.671 14:32:46 -- dd/common.sh@12 -- # local size=0xffff 00:13:37.671 14:32:46 -- dd/common.sh@14 -- # local bs=1048576 
00:13:37.671 14:32:46 -- dd/common.sh@15 -- # local count=1 00:13:37.671 14:32:46 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:37.671 14:32:46 -- dd/common.sh@18 -- # gen_conf 00:13:37.671 14:32:46 -- dd/common.sh@31 -- # xtrace_disable 00:13:37.671 14:32:46 -- common/autotest_common.sh@10 -- # set +x 00:13:37.671 [2024-04-17 14:32:46.148656] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:37.671 [2024-04-17 14:32:46.148741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62580 ] 00:13:37.671 { 00:13:37.671 "subsystems": [ 00:13:37.671 { 00:13:37.671 "subsystem": "bdev", 00:13:37.671 "config": [ 00:13:37.671 { 00:13:37.671 "params": { 00:13:37.671 "trtype": "pcie", 00:13:37.671 "traddr": "0000:00:10.0", 00:13:37.671 "name": "Nvme0" 00:13:37.671 }, 00:13:37.671 "method": "bdev_nvme_attach_controller" 00:13:37.671 }, 00:13:37.671 { 00:13:37.671 "method": "bdev_wait_for_examine" 00:13:37.671 } 00:13:37.671 ] 00:13:37.671 } 00:13:37.671 ] 00:13:37.671 } 00:13:37.931 [2024-04-17 14:32:46.278835] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.931 [2024-04-17 14:32:46.347418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.189  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:38.189 00:13:38.189 14:32:46 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:38.189 00:13:38.189 real 0m17.047s 00:13:38.189 user 0m12.812s 00:13:38.189 sys 0m4.832s 00:13:38.189 14:32:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:38.189 14:32:46 -- common/autotest_common.sh@10 -- # set +x 00:13:38.189 ************************************ 00:13:38.189 END TEST spdk_dd_basic_rw 00:13:38.189 ************************************ 00:13:38.189 14:32:46 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:13:38.189 14:32:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:38.189 14:32:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:38.189 14:32:46 -- common/autotest_common.sh@10 -- # set +x 00:13:38.189 ************************************ 00:13:38.189 START TEST spdk_dd_posix 00:13:38.189 ************************************ 00:13:38.189 14:32:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:13:38.448 * Looking for test storage... 
00:13:38.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:38.448 14:32:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.448 14:32:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.448 14:32:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.448 14:32:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.448 14:32:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.448 14:32:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.448 14:32:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.448 14:32:46 -- paths/export.sh@5 -- # export PATH 00:13:38.448 14:32:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.448 14:32:46 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:13:38.448 14:32:46 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:13:38.448 14:32:46 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:13:38.448 14:32:46 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:13:38.448 14:32:46 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:38.448 14:32:46 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:38.448 14:32:46 -- dd/posix.sh@130 -- # tests 00:13:38.448 14:32:46 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:13:38.448 * First test run, liburing in use 00:13:38.448 14:32:46 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:13:38.448 14:32:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:38.448 14:32:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:38.448 14:32:46 -- common/autotest_common.sh@10 -- # set +x 00:13:38.448 ************************************ 00:13:38.448 START TEST dd_flag_append 00:13:38.448 ************************************ 00:13:38.448 14:32:46 -- common/autotest_common.sh@1111 -- # append 00:13:38.448 14:32:46 -- dd/posix.sh@16 -- # local dump0 00:13:38.448 14:32:46 -- dd/posix.sh@17 -- # local dump1 00:13:38.448 14:32:46 -- dd/posix.sh@19 -- # gen_bytes 32 00:13:38.448 14:32:46 -- dd/common.sh@98 -- # xtrace_disable 00:13:38.448 14:32:46 -- common/autotest_common.sh@10 -- # set +x 00:13:38.448 14:32:46 -- dd/posix.sh@19 -- # dump0=schpvuae2hrmrdcgdr41okudb7rpwork 00:13:38.448 14:32:46 -- dd/posix.sh@20 -- # gen_bytes 32 00:13:38.448 14:32:46 -- dd/common.sh@98 -- # xtrace_disable 00:13:38.448 14:32:46 -- common/autotest_common.sh@10 -- # set +x 00:13:38.448 14:32:46 -- dd/posix.sh@20 -- # dump1=yxcmgop4lbbg4ccjdt0sju2lshayoij3 00:13:38.448 14:32:46 -- dd/posix.sh@22 -- # printf %s schpvuae2hrmrdcgdr41okudb7rpwork 00:13:38.448 14:32:46 -- dd/posix.sh@23 -- # printf %s yxcmgop4lbbg4ccjdt0sju2lshayoij3 00:13:38.448 14:32:46 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:13:38.448 [2024-04-17 14:32:46.979816] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:38.448 [2024-04-17 14:32:46.979962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62649 ] 00:13:38.707 [2024-04-17 14:32:47.122930] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.707 [2024-04-17 14:32:47.183206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.966  Copying: 32/32 [B] (average 31 kBps) 00:13:38.966 00:13:38.966 14:32:47 -- dd/posix.sh@27 -- # [[ yxcmgop4lbbg4ccjdt0sju2lshayoij3schpvuae2hrmrdcgdr41okudb7rpwork == \y\x\c\m\g\o\p\4\l\b\b\g\4\c\c\j\d\t\0\s\j\u\2\l\s\h\a\y\o\i\j\3\s\c\h\p\v\u\a\e\2\h\r\m\r\d\c\g\d\r\4\1\o\k\u\d\b\7\r\p\w\o\r\k ]] 00:13:38.966 00:13:38.966 real 0m0.502s 00:13:38.966 user 0m0.297s 00:13:38.966 sys 0m0.181s 00:13:38.966 14:32:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:38.966 14:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:38.966 ************************************ 00:13:38.966 END TEST dd_flag_append 00:13:38.966 ************************************ 00:13:38.966 14:32:47 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:13:38.966 14:32:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:38.966 14:32:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:38.966 14:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:38.966 ************************************ 00:13:38.966 START TEST dd_flag_directory 00:13:38.966 ************************************ 00:13:38.966 14:32:47 -- common/autotest_common.sh@1111 -- # directory 00:13:38.966 14:32:47 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:38.966 14:32:47 -- 
common/autotest_common.sh@638 -- # local es=0 00:13:38.966 14:32:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:38.966 14:32:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:38.966 14:32:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.966 14:32:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:38.966 14:32:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.966 14:32:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:38.966 14:32:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.966 14:32:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:38.966 14:32:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:38.966 14:32:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:39.224 [2024-04-17 14:32:47.590406] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:39.225 [2024-04-17 14:32:47.590552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62681 ] 00:13:39.225 [2024-04-17 14:32:47.729560] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.225 [2024-04-17 14:32:47.814113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.484 [2024-04-17 14:32:47.874104] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:39.484 [2024-04-17 14:32:47.874182] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:39.484 [2024-04-17 14:32:47.874205] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:39.484 [2024-04-17 14:32:47.947070] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:13:39.484 14:32:48 -- common/autotest_common.sh@641 -- # es=236 00:13:39.484 14:32:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:39.484 14:32:48 -- common/autotest_common.sh@650 -- # es=108 00:13:39.484 14:32:48 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:39.484 14:32:48 -- common/autotest_common.sh@658 -- # es=1 00:13:39.484 14:32:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:39.484 14:32:48 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:39.484 14:32:48 -- common/autotest_common.sh@638 -- # local es=0 00:13:39.484 14:32:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:39.484 14:32:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:13:39.484 14:32:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.484 14:32:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.484 14:32:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.484 14:32:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.484 14:32:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.484 14:32:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.484 14:32:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:39.484 14:32:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:39.743 [2024-04-17 14:32:48.118133] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:39.743 [2024-04-17 14:32:48.118245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62691 ] 00:13:39.743 [2024-04-17 14:32:48.250922] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.743 [2024-04-17 14:32:48.313219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.001 [2024-04-17 14:32:48.360649] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:40.001 [2024-04-17 14:32:48.360707] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:40.001 [2024-04-17 14:32:48.360721] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:40.001 [2024-04-17 14:32:48.425626] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:13:40.001 14:32:48 -- common/autotest_common.sh@641 -- # es=236 00:13:40.001 14:32:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:40.001 14:32:48 -- common/autotest_common.sh@650 -- # es=108 00:13:40.001 14:32:48 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:40.001 14:32:48 -- common/autotest_common.sh@658 -- # es=1 00:13:40.001 14:32:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:40.001 00:13:40.001 real 0m1.006s 00:13:40.001 user 0m0.599s 00:13:40.001 sys 0m0.196s 00:13:40.001 14:32:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:40.001 14:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.001 ************************************ 00:13:40.001 END TEST dd_flag_directory 00:13:40.001 ************************************ 00:13:40.001 14:32:48 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:13:40.001 14:32:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:40.001 14:32:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:40.001 14:32:48 -- common/autotest_common.sh@10 -- # set +x 00:13:40.259 ************************************ 00:13:40.259 START TEST dd_flag_nofollow 00:13:40.259 ************************************ 00:13:40.259 14:32:48 -- common/autotest_common.sh@1111 -- # nofollow 00:13:40.259 14:32:48 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:40.259 14:32:48 -- dd/posix.sh@37 -- # 
local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:40.259 14:32:48 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:40.259 14:32:48 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:40.259 14:32:48 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:40.259 14:32:48 -- common/autotest_common.sh@638 -- # local es=0 00:13:40.259 14:32:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:40.259 14:32:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.259 14:32:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:40.259 14:32:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.259 14:32:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:40.259 14:32:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.259 14:32:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:40.259 14:32:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.259 14:32:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:40.259 14:32:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:40.259 [2024-04-17 14:32:48.726573] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:40.259 [2024-04-17 14:32:48.726702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62723 ] 00:13:40.517 [2024-04-17 14:32:48.873218] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.517 [2024-04-17 14:32:48.931899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.517 [2024-04-17 14:32:48.979754] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:40.517 [2024-04-17 14:32:48.979815] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:40.517 [2024-04-17 14:32:48.979831] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:40.517 [2024-04-17 14:32:49.042540] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:13:40.776 14:32:49 -- common/autotest_common.sh@641 -- # es=216 00:13:40.776 14:32:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:40.776 14:32:49 -- common/autotest_common.sh@650 -- # es=88 00:13:40.776 14:32:49 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:40.776 14:32:49 -- common/autotest_common.sh@658 -- # es=1 00:13:40.776 14:32:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:40.776 14:32:49 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:40.776 14:32:49 -- common/autotest_common.sh@638 -- # local es=0 00:13:40.776 14:32:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:40.776 14:32:49 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.776 14:32:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:40.776 14:32:49 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.776 14:32:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:40.776 14:32:49 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.776 14:32:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:40.776 14:32:49 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.776 14:32:49 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:40.776 14:32:49 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:40.776 [2024-04-17 14:32:49.200307] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:40.776 [2024-04-17 14:32:49.200427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62733 ] 00:13:40.776 [2024-04-17 14:32:49.332120] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.066 [2024-04-17 14:32:49.389482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.066 [2024-04-17 14:32:49.436288] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:41.066 [2024-04-17 14:32:49.436347] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:41.066 [2024-04-17 14:32:49.436364] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:41.066 [2024-04-17 14:32:49.497415] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:13:41.066 14:32:49 -- common/autotest_common.sh@641 -- # es=216 00:13:41.066 14:32:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:41.066 14:32:49 -- common/autotest_common.sh@650 -- # es=88 00:13:41.066 14:32:49 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:41.066 14:32:49 -- common/autotest_common.sh@658 -- # es=1 00:13:41.066 14:32:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:41.066 14:32:49 -- dd/posix.sh@46 -- # gen_bytes 512 00:13:41.066 14:32:49 -- dd/common.sh@98 -- # xtrace_disable 00:13:41.066 14:32:49 -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 14:32:49 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:41.066 [2024-04-17 14:32:49.666002] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:41.066 [2024-04-17 14:32:49.666127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62740 ] 00:13:41.325 [2024-04-17 14:32:49.810090] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.325 [2024-04-17 14:32:49.867833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.584  Copying: 512/512 [B] (average 500 kBps) 00:13:41.584 00:13:41.584 14:32:50 -- dd/posix.sh@49 -- # [[ r7rvz89bhhucp9j96ibh0xqb24s5mdlwtl2nu3d36d4x9oe2wt6l11r4woo7zg2po82l56w8xne6tdtfkwnnpu9jdsj5k9wd06q3tllvpyi7xjz92qtv2l575ugml83483pv28y0tvm5zez85jvx7gg81quwbl79lh24pi0cjazh71pp92m5ed97sesxnurgnyrmg4b0q69149wc0l8kf8o9dvzpush0stmogds9jlkipwcwgorp47upaj0sz0luactled4ek647p6em65cejhtghqyy4018ix52cdmxiyi9p272k6zm2oz0p8pmfq3qj7reztq4lbcxk2kupvd3w6recppd7o6j22lfsbh9bpd6xo9wmuulns48kfg7ib8nad9uooyw2fbum0ran6hcvziyay0h3e73g63ewvipye3epdzajozmpphkf83g435vw4qhy5wj1j9uieh0vg1bdfmhbiug0i4d2u7buu678b24azcv69bwbtlt8490ot0q == \r\7\r\v\z\8\9\b\h\h\u\c\p\9\j\9\6\i\b\h\0\x\q\b\2\4\s\5\m\d\l\w\t\l\2\n\u\3\d\3\6\d\4\x\9\o\e\2\w\t\6\l\1\1\r\4\w\o\o\7\z\g\2\p\o\8\2\l\5\6\w\8\x\n\e\6\t\d\t\f\k\w\n\n\p\u\9\j\d\s\j\5\k\9\w\d\0\6\q\3\t\l\l\v\p\y\i\7\x\j\z\9\2\q\t\v\2\l\5\7\5\u\g\m\l\8\3\4\8\3\p\v\2\8\y\0\t\v\m\5\z\e\z\8\5\j\v\x\7\g\g\8\1\q\u\w\b\l\7\9\l\h\2\4\p\i\0\c\j\a\z\h\7\1\p\p\9\2\m\5\e\d\9\7\s\e\s\x\n\u\r\g\n\y\r\m\g\4\b\0\q\6\9\1\4\9\w\c\0\l\8\k\f\8\o\9\d\v\z\p\u\s\h\0\s\t\m\o\g\d\s\9\j\l\k\i\p\w\c\w\g\o\r\p\4\7\u\p\a\j\0\s\z\0\l\u\a\c\t\l\e\d\4\e\k\6\4\7\p\6\e\m\6\5\c\e\j\h\t\g\h\q\y\y\4\0\1\8\i\x\5\2\c\d\m\x\i\y\i\9\p\2\7\2\k\6\z\m\2\o\z\0\p\8\p\m\f\q\3\q\j\7\r\e\z\t\q\4\l\b\c\x\k\2\k\u\p\v\d\3\w\6\r\e\c\p\p\d\7\o\6\j\2\2\l\f\s\b\h\9\b\p\d\6\x\o\9\w\m\u\u\l\n\s\4\8\k\f\g\7\i\b\8\n\a\d\9\u\o\o\y\w\2\f\b\u\m\0\r\a\n\6\h\c\v\z\i\y\a\y\0\h\3\e\7\3\g\6\3\e\w\v\i\p\y\e\3\e\p\d\z\a\j\o\z\m\p\p\h\k\f\8\3\g\4\3\5\v\w\4\q\h\y\5\w\j\1\j\9\u\i\e\h\0\v\g\1\b\d\f\m\h\b\i\u\g\0\i\4\d\2\u\7\b\u\u\6\7\8\b\2\4\a\z\c\v\6\9\b\w\b\t\l\t\8\4\9\0\o\t\0\q ]] 00:13:41.584 00:13:41.584 real 0m1.444s 00:13:41.584 user 0m0.829s 00:13:41.584 sys 0m0.382s 00:13:41.584 14:32:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:41.584 ************************************ 00:13:41.584 14:32:50 -- common/autotest_common.sh@10 -- # set +x 00:13:41.584 END TEST dd_flag_nofollow 00:13:41.584 ************************************ 00:13:41.584 14:32:50 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:13:41.584 14:32:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:41.584 14:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.584 14:32:50 -- common/autotest_common.sh@10 -- # set +x 00:13:41.843 ************************************ 00:13:41.843 START TEST dd_flag_noatime 00:13:41.843 ************************************ 00:13:41.843 14:32:50 -- common/autotest_common.sh@1111 -- # noatime 00:13:41.843 14:32:50 -- dd/posix.sh@53 -- # local atime_if 00:13:41.843 14:32:50 -- dd/posix.sh@54 -- # local atime_of 00:13:41.843 14:32:50 -- dd/posix.sh@58 -- # gen_bytes 512 00:13:41.843 14:32:50 -- dd/common.sh@98 -- # xtrace_disable 00:13:41.843 14:32:50 -- common/autotest_common.sh@10 -- # set +x 00:13:41.843 14:32:50 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:41.843 14:32:50 -- dd/posix.sh@60 -- # atime_if=1713364369 00:13:41.843 14:32:50 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:41.843 14:32:50 -- dd/posix.sh@61 -- # atime_of=1713364370 00:13:41.843 14:32:50 -- dd/posix.sh@66 -- # sleep 1 00:13:42.780 14:32:51 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:42.780 [2024-04-17 14:32:51.269382] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:42.780 [2024-04-17 14:32:51.269470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62787 ] 00:13:43.039 [2024-04-17 14:32:51.407153] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.039 [2024-04-17 14:32:51.475274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.297  Copying: 512/512 [B] (average 500 kBps) 00:13:43.297 00:13:43.297 14:32:51 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:43.297 14:32:51 -- dd/posix.sh@69 -- # (( atime_if == 1713364369 )) 00:13:43.297 14:32:51 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:43.297 14:32:51 -- dd/posix.sh@70 -- # (( atime_of == 1713364370 )) 00:13:43.297 14:32:51 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:43.297 [2024-04-17 14:32:51.760167] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:43.297 [2024-04-17 14:32:51.760261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62800 ] 00:13:43.297 [2024-04-17 14:32:51.892804] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.555 [2024-04-17 14:32:51.951329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.813  Copying: 512/512 [B] (average 500 kBps) 00:13:43.813 00:13:43.813 14:32:52 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:43.813 14:32:52 -- dd/posix.sh@73 -- # (( atime_if < 1713364371 )) 00:13:43.813 00:13:43.813 real 0m1.993s 00:13:43.813 user 0m0.575s 00:13:43.813 sys 0m0.366s 00:13:43.813 14:32:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:43.813 ************************************ 00:13:43.813 END TEST dd_flag_noatime 00:13:43.813 ************************************ 00:13:43.813 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:43.813 14:32:52 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:13:43.813 14:32:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:43.813 14:32:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.813 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:43.813 ************************************ 00:13:43.813 START TEST dd_flags_misc 00:13:43.813 ************************************ 00:13:43.813 14:32:52 -- common/autotest_common.sh@1111 -- # io 00:13:43.813 14:32:52 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:13:43.813 14:32:52 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:13:43.813 
14:32:52 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:13:43.813 14:32:52 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:13:43.813 14:32:52 -- dd/posix.sh@86 -- # gen_bytes 512 00:13:43.813 14:32:52 -- dd/common.sh@98 -- # xtrace_disable 00:13:43.813 14:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:43.813 14:32:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:43.813 14:32:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:13:43.813 [2024-04-17 14:32:52.363810] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:43.813 [2024-04-17 14:32:52.363913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62833 ] 00:13:44.071 [2024-04-17 14:32:52.496429] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.071 [2024-04-17 14:32:52.555356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.329  Copying: 512/512 [B] (average 500 kBps) 00:13:44.329 00:13:44.330 14:32:52 -- dd/posix.sh@93 -- # [[ upz9tdtjgvqv6egwqc8a8cjcbclfstkdzmhnve89flhhele3mrnxd5y67ng3tuxplku6wfkeollh1qmi4tiz3s291aoqw5px12htz5dv8w03jx99okuyln5a9w3jylolledyjq0o96flqacldv5g6l083rz4cedbzapc0htu8vaf6bknlj7bxpwv3kzrrbhyx2dsxi8qtgkwerzrvlogef4v51ishkiiwly3wave12yn7uao6fdnqg3wdt46x01b1js10cc7rx3d864p3n853zuy02qz7vb7c7wt1thszlhobm194nso4mgg0tf3zkum34giq7rlb3ti0pr5881zda4b0yeuter5xeivtt73c6ft4xukb8ocgyb78b5rsyy36l0bopbraglmnkahselxtp7g4xv5dsr8zu2ja2iqixhlihrtd3313cnlblzse9f9ialogf9rv7hgo10ufpv8fb7ztmrfovgy59ld8pmn5x2j51wgf5c6x4gexyuzle2i == \u\p\z\9\t\d\t\j\g\v\q\v\6\e\g\w\q\c\8\a\8\c\j\c\b\c\l\f\s\t\k\d\z\m\h\n\v\e\8\9\f\l\h\h\e\l\e\3\m\r\n\x\d\5\y\6\7\n\g\3\t\u\x\p\l\k\u\6\w\f\k\e\o\l\l\h\1\q\m\i\4\t\i\z\3\s\2\9\1\a\o\q\w\5\p\x\1\2\h\t\z\5\d\v\8\w\0\3\j\x\9\9\o\k\u\y\l\n\5\a\9\w\3\j\y\l\o\l\l\e\d\y\j\q\0\o\9\6\f\l\q\a\c\l\d\v\5\g\6\l\0\8\3\r\z\4\c\e\d\b\z\a\p\c\0\h\t\u\8\v\a\f\6\b\k\n\l\j\7\b\x\p\w\v\3\k\z\r\r\b\h\y\x\2\d\s\x\i\8\q\t\g\k\w\e\r\z\r\v\l\o\g\e\f\4\v\5\1\i\s\h\k\i\i\w\l\y\3\w\a\v\e\1\2\y\n\7\u\a\o\6\f\d\n\q\g\3\w\d\t\4\6\x\0\1\b\1\j\s\1\0\c\c\7\r\x\3\d\8\6\4\p\3\n\8\5\3\z\u\y\0\2\q\z\7\v\b\7\c\7\w\t\1\t\h\s\z\l\h\o\b\m\1\9\4\n\s\o\4\m\g\g\0\t\f\3\z\k\u\m\3\4\g\i\q\7\r\l\b\3\t\i\0\p\r\5\8\8\1\z\d\a\4\b\0\y\e\u\t\e\r\5\x\e\i\v\t\t\7\3\c\6\f\t\4\x\u\k\b\8\o\c\g\y\b\7\8\b\5\r\s\y\y\3\6\l\0\b\o\p\b\r\a\g\l\m\n\k\a\h\s\e\l\x\t\p\7\g\4\x\v\5\d\s\r\8\z\u\2\j\a\2\i\q\i\x\h\l\i\h\r\t\d\3\3\1\3\c\n\l\b\l\z\s\e\9\f\9\i\a\l\o\g\f\9\r\v\7\h\g\o\1\0\u\f\p\v\8\f\b\7\z\t\m\r\f\o\v\g\y\5\9\l\d\8\p\m\n\5\x\2\j\5\1\w\g\f\5\c\6\x\4\g\e\x\y\u\z\l\e\2\i ]] 00:13:44.330 14:32:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:44.330 14:32:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:13:44.330 [2024-04-17 14:32:52.831050] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:44.330 [2024-04-17 14:32:52.831139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62842 ] 00:13:44.588 [2024-04-17 14:32:52.965524] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.588 [2024-04-17 14:32:53.023854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.846  Copying: 512/512 [B] (average 500 kBps) 00:13:44.846 00:13:44.846 14:32:53 -- dd/posix.sh@93 -- # [[ upz9tdtjgvqv6egwqc8a8cjcbclfstkdzmhnve89flhhele3mrnxd5y67ng3tuxplku6wfkeollh1qmi4tiz3s291aoqw5px12htz5dv8w03jx99okuyln5a9w3jylolledyjq0o96flqacldv5g6l083rz4cedbzapc0htu8vaf6bknlj7bxpwv3kzrrbhyx2dsxi8qtgkwerzrvlogef4v51ishkiiwly3wave12yn7uao6fdnqg3wdt46x01b1js10cc7rx3d864p3n853zuy02qz7vb7c7wt1thszlhobm194nso4mgg0tf3zkum34giq7rlb3ti0pr5881zda4b0yeuter5xeivtt73c6ft4xukb8ocgyb78b5rsyy36l0bopbraglmnkahselxtp7g4xv5dsr8zu2ja2iqixhlihrtd3313cnlblzse9f9ialogf9rv7hgo10ufpv8fb7ztmrfovgy59ld8pmn5x2j51wgf5c6x4gexyuzle2i == \u\p\z\9\t\d\t\j\g\v\q\v\6\e\g\w\q\c\8\a\8\c\j\c\b\c\l\f\s\t\k\d\z\m\h\n\v\e\8\9\f\l\h\h\e\l\e\3\m\r\n\x\d\5\y\6\7\n\g\3\t\u\x\p\l\k\u\6\w\f\k\e\o\l\l\h\1\q\m\i\4\t\i\z\3\s\2\9\1\a\o\q\w\5\p\x\1\2\h\t\z\5\d\v\8\w\0\3\j\x\9\9\o\k\u\y\l\n\5\a\9\w\3\j\y\l\o\l\l\e\d\y\j\q\0\o\9\6\f\l\q\a\c\l\d\v\5\g\6\l\0\8\3\r\z\4\c\e\d\b\z\a\p\c\0\h\t\u\8\v\a\f\6\b\k\n\l\j\7\b\x\p\w\v\3\k\z\r\r\b\h\y\x\2\d\s\x\i\8\q\t\g\k\w\e\r\z\r\v\l\o\g\e\f\4\v\5\1\i\s\h\k\i\i\w\l\y\3\w\a\v\e\1\2\y\n\7\u\a\o\6\f\d\n\q\g\3\w\d\t\4\6\x\0\1\b\1\j\s\1\0\c\c\7\r\x\3\d\8\6\4\p\3\n\8\5\3\z\u\y\0\2\q\z\7\v\b\7\c\7\w\t\1\t\h\s\z\l\h\o\b\m\1\9\4\n\s\o\4\m\g\g\0\t\f\3\z\k\u\m\3\4\g\i\q\7\r\l\b\3\t\i\0\p\r\5\8\8\1\z\d\a\4\b\0\y\e\u\t\e\r\5\x\e\i\v\t\t\7\3\c\6\f\t\4\x\u\k\b\8\o\c\g\y\b\7\8\b\5\r\s\y\y\3\6\l\0\b\o\p\b\r\a\g\l\m\n\k\a\h\s\e\l\x\t\p\7\g\4\x\v\5\d\s\r\8\z\u\2\j\a\2\i\q\i\x\h\l\i\h\r\t\d\3\3\1\3\c\n\l\b\l\z\s\e\9\f\9\i\a\l\o\g\f\9\r\v\7\h\g\o\1\0\u\f\p\v\8\f\b\7\z\t\m\r\f\o\v\g\y\5\9\l\d\8\p\m\n\5\x\2\j\5\1\w\g\f\5\c\6\x\4\g\e\x\y\u\z\l\e\2\i ]] 00:13:44.846 14:32:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:44.846 14:32:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:13:44.846 [2024-04-17 14:32:53.291192] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:44.846 [2024-04-17 14:32:53.291285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62852 ] 00:13:44.846 [2024-04-17 14:32:53.422618] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.118 [2024-04-17 14:32:53.503283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.388  Copying: 512/512 [B] (average 166 kBps) 00:13:45.388 00:13:45.388 14:32:53 -- dd/posix.sh@93 -- # [[ upz9tdtjgvqv6egwqc8a8cjcbclfstkdzmhnve89flhhele3mrnxd5y67ng3tuxplku6wfkeollh1qmi4tiz3s291aoqw5px12htz5dv8w03jx99okuyln5a9w3jylolledyjq0o96flqacldv5g6l083rz4cedbzapc0htu8vaf6bknlj7bxpwv3kzrrbhyx2dsxi8qtgkwerzrvlogef4v51ishkiiwly3wave12yn7uao6fdnqg3wdt46x01b1js10cc7rx3d864p3n853zuy02qz7vb7c7wt1thszlhobm194nso4mgg0tf3zkum34giq7rlb3ti0pr5881zda4b0yeuter5xeivtt73c6ft4xukb8ocgyb78b5rsyy36l0bopbraglmnkahselxtp7g4xv5dsr8zu2ja2iqixhlihrtd3313cnlblzse9f9ialogf9rv7hgo10ufpv8fb7ztmrfovgy59ld8pmn5x2j51wgf5c6x4gexyuzle2i == \u\p\z\9\t\d\t\j\g\v\q\v\6\e\g\w\q\c\8\a\8\c\j\c\b\c\l\f\s\t\k\d\z\m\h\n\v\e\8\9\f\l\h\h\e\l\e\3\m\r\n\x\d\5\y\6\7\n\g\3\t\u\x\p\l\k\u\6\w\f\k\e\o\l\l\h\1\q\m\i\4\t\i\z\3\s\2\9\1\a\o\q\w\5\p\x\1\2\h\t\z\5\d\v\8\w\0\3\j\x\9\9\o\k\u\y\l\n\5\a\9\w\3\j\y\l\o\l\l\e\d\y\j\q\0\o\9\6\f\l\q\a\c\l\d\v\5\g\6\l\0\8\3\r\z\4\c\e\d\b\z\a\p\c\0\h\t\u\8\v\a\f\6\b\k\n\l\j\7\b\x\p\w\v\3\k\z\r\r\b\h\y\x\2\d\s\x\i\8\q\t\g\k\w\e\r\z\r\v\l\o\g\e\f\4\v\5\1\i\s\h\k\i\i\w\l\y\3\w\a\v\e\1\2\y\n\7\u\a\o\6\f\d\n\q\g\3\w\d\t\4\6\x\0\1\b\1\j\s\1\0\c\c\7\r\x\3\d\8\6\4\p\3\n\8\5\3\z\u\y\0\2\q\z\7\v\b\7\c\7\w\t\1\t\h\s\z\l\h\o\b\m\1\9\4\n\s\o\4\m\g\g\0\t\f\3\z\k\u\m\3\4\g\i\q\7\r\l\b\3\t\i\0\p\r\5\8\8\1\z\d\a\4\b\0\y\e\u\t\e\r\5\x\e\i\v\t\t\7\3\c\6\f\t\4\x\u\k\b\8\o\c\g\y\b\7\8\b\5\r\s\y\y\3\6\l\0\b\o\p\b\r\a\g\l\m\n\k\a\h\s\e\l\x\t\p\7\g\4\x\v\5\d\s\r\8\z\u\2\j\a\2\i\q\i\x\h\l\i\h\r\t\d\3\3\1\3\c\n\l\b\l\z\s\e\9\f\9\i\a\l\o\g\f\9\r\v\7\h\g\o\1\0\u\f\p\v\8\f\b\7\z\t\m\r\f\o\v\g\y\5\9\l\d\8\p\m\n\5\x\2\j\5\1\w\g\f\5\c\6\x\4\g\e\x\y\u\z\l\e\2\i ]] 00:13:45.388 14:32:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:45.388 14:32:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:13:45.388 [2024-04-17 14:32:53.797678] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:45.388 [2024-04-17 14:32:53.797807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62861 ] 00:13:45.388 [2024-04-17 14:32:53.936632] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.647 [2024-04-17 14:32:53.994574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.647  Copying: 512/512 [B] (average 250 kBps) 00:13:45.647 00:13:45.647 14:32:54 -- dd/posix.sh@93 -- # [[ upz9tdtjgvqv6egwqc8a8cjcbclfstkdzmhnve89flhhele3mrnxd5y67ng3tuxplku6wfkeollh1qmi4tiz3s291aoqw5px12htz5dv8w03jx99okuyln5a9w3jylolledyjq0o96flqacldv5g6l083rz4cedbzapc0htu8vaf6bknlj7bxpwv3kzrrbhyx2dsxi8qtgkwerzrvlogef4v51ishkiiwly3wave12yn7uao6fdnqg3wdt46x01b1js10cc7rx3d864p3n853zuy02qz7vb7c7wt1thszlhobm194nso4mgg0tf3zkum34giq7rlb3ti0pr5881zda4b0yeuter5xeivtt73c6ft4xukb8ocgyb78b5rsyy36l0bopbraglmnkahselxtp7g4xv5dsr8zu2ja2iqixhlihrtd3313cnlblzse9f9ialogf9rv7hgo10ufpv8fb7ztmrfovgy59ld8pmn5x2j51wgf5c6x4gexyuzle2i == \u\p\z\9\t\d\t\j\g\v\q\v\6\e\g\w\q\c\8\a\8\c\j\c\b\c\l\f\s\t\k\d\z\m\h\n\v\e\8\9\f\l\h\h\e\l\e\3\m\r\n\x\d\5\y\6\7\n\g\3\t\u\x\p\l\k\u\6\w\f\k\e\o\l\l\h\1\q\m\i\4\t\i\z\3\s\2\9\1\a\o\q\w\5\p\x\1\2\h\t\z\5\d\v\8\w\0\3\j\x\9\9\o\k\u\y\l\n\5\a\9\w\3\j\y\l\o\l\l\e\d\y\j\q\0\o\9\6\f\l\q\a\c\l\d\v\5\g\6\l\0\8\3\r\z\4\c\e\d\b\z\a\p\c\0\h\t\u\8\v\a\f\6\b\k\n\l\j\7\b\x\p\w\v\3\k\z\r\r\b\h\y\x\2\d\s\x\i\8\q\t\g\k\w\e\r\z\r\v\l\o\g\e\f\4\v\5\1\i\s\h\k\i\i\w\l\y\3\w\a\v\e\1\2\y\n\7\u\a\o\6\f\d\n\q\g\3\w\d\t\4\6\x\0\1\b\1\j\s\1\0\c\c\7\r\x\3\d\8\6\4\p\3\n\8\5\3\z\u\y\0\2\q\z\7\v\b\7\c\7\w\t\1\t\h\s\z\l\h\o\b\m\1\9\4\n\s\o\4\m\g\g\0\t\f\3\z\k\u\m\3\4\g\i\q\7\r\l\b\3\t\i\0\p\r\5\8\8\1\z\d\a\4\b\0\y\e\u\t\e\r\5\x\e\i\v\t\t\7\3\c\6\f\t\4\x\u\k\b\8\o\c\g\y\b\7\8\b\5\r\s\y\y\3\6\l\0\b\o\p\b\r\a\g\l\m\n\k\a\h\s\e\l\x\t\p\7\g\4\x\v\5\d\s\r\8\z\u\2\j\a\2\i\q\i\x\h\l\i\h\r\t\d\3\3\1\3\c\n\l\b\l\z\s\e\9\f\9\i\a\l\o\g\f\9\r\v\7\h\g\o\1\0\u\f\p\v\8\f\b\7\z\t\m\r\f\o\v\g\y\5\9\l\d\8\p\m\n\5\x\2\j\5\1\w\g\f\5\c\6\x\4\g\e\x\y\u\z\l\e\2\i ]] 00:13:45.647 14:32:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:13:45.647 14:32:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:13:45.647 14:32:54 -- dd/common.sh@98 -- # xtrace_disable 00:13:45.647 14:32:54 -- common/autotest_common.sh@10 -- # set +x 00:13:45.647 14:32:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:45.647 14:32:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:13:45.906 [2024-04-17 14:32:54.282052] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:45.906 [2024-04-17 14:32:54.282187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62871 ] 00:13:45.906 [2024-04-17 14:32:54.421092] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.906 [2024-04-17 14:32:54.504897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.164  Copying: 512/512 [B] (average 500 kBps) 00:13:46.164 00:13:46.164 14:32:54 -- dd/posix.sh@93 -- # [[ fsvpb5yunuuc5my9y5tdaodcqic3em9h1lkmhyt4rew1bye2k6ug7zrjaqid0kp48vh1t97w9tw9ksg9he8mz7918k0nbl8op22zffwgphics7b07jsijy2185m180mu89adaxp2gfw0ea70pogh6p9tjnminw7boxib3aoo8ys5uegwb21u1goh6c1ggxf7mjp9fmbtp5m4vhn0mnczm29uq7hu00tcodph2ofqguz8vborq9gdue1eibok107hlegvh5mmwe68vnn1h9imapo1vh3yspp97gpqm8pg5qnefa7aay3xh54ngajtbygkla0064w5z3fopzejs60smwsf0sixkb3h3ltiva0j8bkgc6z9iqpq7tcbya6zlevbgaplyo3mre894j0ik2pton82ts59kylyo6etkbx1lh08y6a18ow5bu3gqtjz5zcronpibi7tn6yp55x8f2ha79ojzdxhkm2oa3b4xapplasfsske0xp8oq26fgvkejps == \f\s\v\p\b\5\y\u\n\u\u\c\5\m\y\9\y\5\t\d\a\o\d\c\q\i\c\3\e\m\9\h\1\l\k\m\h\y\t\4\r\e\w\1\b\y\e\2\k\6\u\g\7\z\r\j\a\q\i\d\0\k\p\4\8\v\h\1\t\9\7\w\9\t\w\9\k\s\g\9\h\e\8\m\z\7\9\1\8\k\0\n\b\l\8\o\p\2\2\z\f\f\w\g\p\h\i\c\s\7\b\0\7\j\s\i\j\y\2\1\8\5\m\1\8\0\m\u\8\9\a\d\a\x\p\2\g\f\w\0\e\a\7\0\p\o\g\h\6\p\9\t\j\n\m\i\n\w\7\b\o\x\i\b\3\a\o\o\8\y\s\5\u\e\g\w\b\2\1\u\1\g\o\h\6\c\1\g\g\x\f\7\m\j\p\9\f\m\b\t\p\5\m\4\v\h\n\0\m\n\c\z\m\2\9\u\q\7\h\u\0\0\t\c\o\d\p\h\2\o\f\q\g\u\z\8\v\b\o\r\q\9\g\d\u\e\1\e\i\b\o\k\1\0\7\h\l\e\g\v\h\5\m\m\w\e\6\8\v\n\n\1\h\9\i\m\a\p\o\1\v\h\3\y\s\p\p\9\7\g\p\q\m\8\p\g\5\q\n\e\f\a\7\a\a\y\3\x\h\5\4\n\g\a\j\t\b\y\g\k\l\a\0\0\6\4\w\5\z\3\f\o\p\z\e\j\s\6\0\s\m\w\s\f\0\s\i\x\k\b\3\h\3\l\t\i\v\a\0\j\8\b\k\g\c\6\z\9\i\q\p\q\7\t\c\b\y\a\6\z\l\e\v\b\g\a\p\l\y\o\3\m\r\e\8\9\4\j\0\i\k\2\p\t\o\n\8\2\t\s\5\9\k\y\l\y\o\6\e\t\k\b\x\1\l\h\0\8\y\6\a\1\8\o\w\5\b\u\3\g\q\t\j\z\5\z\c\r\o\n\p\i\b\i\7\t\n\6\y\p\5\5\x\8\f\2\h\a\7\9\o\j\z\d\x\h\k\m\2\o\a\3\b\4\x\a\p\p\l\a\s\f\s\s\k\e\0\x\p\8\o\q\2\6\f\g\v\k\e\j\p\s ]] 00:13:46.164 14:32:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:46.164 14:32:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:13:46.423 [2024-04-17 14:32:54.789606] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:46.423 [2024-04-17 14:32:54.789748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62880 ] 00:13:46.423 [2024-04-17 14:32:54.924567] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.423 [2024-04-17 14:32:54.996912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.681  Copying: 512/512 [B] (average 500 kBps) 00:13:46.681 00:13:46.681 14:32:55 -- dd/posix.sh@93 -- # [[ fsvpb5yunuuc5my9y5tdaodcqic3em9h1lkmhyt4rew1bye2k6ug7zrjaqid0kp48vh1t97w9tw9ksg9he8mz7918k0nbl8op22zffwgphics7b07jsijy2185m180mu89adaxp2gfw0ea70pogh6p9tjnminw7boxib3aoo8ys5uegwb21u1goh6c1ggxf7mjp9fmbtp5m4vhn0mnczm29uq7hu00tcodph2ofqguz8vborq9gdue1eibok107hlegvh5mmwe68vnn1h9imapo1vh3yspp97gpqm8pg5qnefa7aay3xh54ngajtbygkla0064w5z3fopzejs60smwsf0sixkb3h3ltiva0j8bkgc6z9iqpq7tcbya6zlevbgaplyo3mre894j0ik2pton82ts59kylyo6etkbx1lh08y6a18ow5bu3gqtjz5zcronpibi7tn6yp55x8f2ha79ojzdxhkm2oa3b4xapplasfsske0xp8oq26fgvkejps == \f\s\v\p\b\5\y\u\n\u\u\c\5\m\y\9\y\5\t\d\a\o\d\c\q\i\c\3\e\m\9\h\1\l\k\m\h\y\t\4\r\e\w\1\b\y\e\2\k\6\u\g\7\z\r\j\a\q\i\d\0\k\p\4\8\v\h\1\t\9\7\w\9\t\w\9\k\s\g\9\h\e\8\m\z\7\9\1\8\k\0\n\b\l\8\o\p\2\2\z\f\f\w\g\p\h\i\c\s\7\b\0\7\j\s\i\j\y\2\1\8\5\m\1\8\0\m\u\8\9\a\d\a\x\p\2\g\f\w\0\e\a\7\0\p\o\g\h\6\p\9\t\j\n\m\i\n\w\7\b\o\x\i\b\3\a\o\o\8\y\s\5\u\e\g\w\b\2\1\u\1\g\o\h\6\c\1\g\g\x\f\7\m\j\p\9\f\m\b\t\p\5\m\4\v\h\n\0\m\n\c\z\m\2\9\u\q\7\h\u\0\0\t\c\o\d\p\h\2\o\f\q\g\u\z\8\v\b\o\r\q\9\g\d\u\e\1\e\i\b\o\k\1\0\7\h\l\e\g\v\h\5\m\m\w\e\6\8\v\n\n\1\h\9\i\m\a\p\o\1\v\h\3\y\s\p\p\9\7\g\p\q\m\8\p\g\5\q\n\e\f\a\7\a\a\y\3\x\h\5\4\n\g\a\j\t\b\y\g\k\l\a\0\0\6\4\w\5\z\3\f\o\p\z\e\j\s\6\0\s\m\w\s\f\0\s\i\x\k\b\3\h\3\l\t\i\v\a\0\j\8\b\k\g\c\6\z\9\i\q\p\q\7\t\c\b\y\a\6\z\l\e\v\b\g\a\p\l\y\o\3\m\r\e\8\9\4\j\0\i\k\2\p\t\o\n\8\2\t\s\5\9\k\y\l\y\o\6\e\t\k\b\x\1\l\h\0\8\y\6\a\1\8\o\w\5\b\u\3\g\q\t\j\z\5\z\c\r\o\n\p\i\b\i\7\t\n\6\y\p\5\5\x\8\f\2\h\a\7\9\o\j\z\d\x\h\k\m\2\o\a\3\b\4\x\a\p\p\l\a\s\f\s\s\k\e\0\x\p\8\o\q\2\6\f\g\v\k\e\j\p\s ]] 00:13:46.681 14:32:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:46.681 14:32:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:13:46.681 [2024-04-17 14:32:55.269357] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:46.681 [2024-04-17 14:32:55.269448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62890 ] 00:13:46.940 [2024-04-17 14:32:55.399328] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.940 [2024-04-17 14:32:55.457686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.198  Copying: 512/512 [B] (average 166 kBps) 00:13:47.198 00:13:47.198 14:32:55 -- dd/posix.sh@93 -- # [[ fsvpb5yunuuc5my9y5tdaodcqic3em9h1lkmhyt4rew1bye2k6ug7zrjaqid0kp48vh1t97w9tw9ksg9he8mz7918k0nbl8op22zffwgphics7b07jsijy2185m180mu89adaxp2gfw0ea70pogh6p9tjnminw7boxib3aoo8ys5uegwb21u1goh6c1ggxf7mjp9fmbtp5m4vhn0mnczm29uq7hu00tcodph2ofqguz8vborq9gdue1eibok107hlegvh5mmwe68vnn1h9imapo1vh3yspp97gpqm8pg5qnefa7aay3xh54ngajtbygkla0064w5z3fopzejs60smwsf0sixkb3h3ltiva0j8bkgc6z9iqpq7tcbya6zlevbgaplyo3mre894j0ik2pton82ts59kylyo6etkbx1lh08y6a18ow5bu3gqtjz5zcronpibi7tn6yp55x8f2ha79ojzdxhkm2oa3b4xapplasfsske0xp8oq26fgvkejps == \f\s\v\p\b\5\y\u\n\u\u\c\5\m\y\9\y\5\t\d\a\o\d\c\q\i\c\3\e\m\9\h\1\l\k\m\h\y\t\4\r\e\w\1\b\y\e\2\k\6\u\g\7\z\r\j\a\q\i\d\0\k\p\4\8\v\h\1\t\9\7\w\9\t\w\9\k\s\g\9\h\e\8\m\z\7\9\1\8\k\0\n\b\l\8\o\p\2\2\z\f\f\w\g\p\h\i\c\s\7\b\0\7\j\s\i\j\y\2\1\8\5\m\1\8\0\m\u\8\9\a\d\a\x\p\2\g\f\w\0\e\a\7\0\p\o\g\h\6\p\9\t\j\n\m\i\n\w\7\b\o\x\i\b\3\a\o\o\8\y\s\5\u\e\g\w\b\2\1\u\1\g\o\h\6\c\1\g\g\x\f\7\m\j\p\9\f\m\b\t\p\5\m\4\v\h\n\0\m\n\c\z\m\2\9\u\q\7\h\u\0\0\t\c\o\d\p\h\2\o\f\q\g\u\z\8\v\b\o\r\q\9\g\d\u\e\1\e\i\b\o\k\1\0\7\h\l\e\g\v\h\5\m\m\w\e\6\8\v\n\n\1\h\9\i\m\a\p\o\1\v\h\3\y\s\p\p\9\7\g\p\q\m\8\p\g\5\q\n\e\f\a\7\a\a\y\3\x\h\5\4\n\g\a\j\t\b\y\g\k\l\a\0\0\6\4\w\5\z\3\f\o\p\z\e\j\s\6\0\s\m\w\s\f\0\s\i\x\k\b\3\h\3\l\t\i\v\a\0\j\8\b\k\g\c\6\z\9\i\q\p\q\7\t\c\b\y\a\6\z\l\e\v\b\g\a\p\l\y\o\3\m\r\e\8\9\4\j\0\i\k\2\p\t\o\n\8\2\t\s\5\9\k\y\l\y\o\6\e\t\k\b\x\1\l\h\0\8\y\6\a\1\8\o\w\5\b\u\3\g\q\t\j\z\5\z\c\r\o\n\p\i\b\i\7\t\n\6\y\p\5\5\x\8\f\2\h\a\7\9\o\j\z\d\x\h\k\m\2\o\a\3\b\4\x\a\p\p\l\a\s\f\s\s\k\e\0\x\p\8\o\q\2\6\f\g\v\k\e\j\p\s ]] 00:13:47.198 14:32:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:47.198 14:32:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:13:47.198 [2024-04-17 14:32:55.725203] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:47.198 [2024-04-17 14:32:55.725288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62898 ] 00:13:47.457 [2024-04-17 14:32:55.854195] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.457 [2024-04-17 14:32:55.911148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.716  Copying: 512/512 [B] (average 500 kBps) 00:13:47.716 00:13:47.716 14:32:56 -- dd/posix.sh@93 -- # [[ fsvpb5yunuuc5my9y5tdaodcqic3em9h1lkmhyt4rew1bye2k6ug7zrjaqid0kp48vh1t97w9tw9ksg9he8mz7918k0nbl8op22zffwgphics7b07jsijy2185m180mu89adaxp2gfw0ea70pogh6p9tjnminw7boxib3aoo8ys5uegwb21u1goh6c1ggxf7mjp9fmbtp5m4vhn0mnczm29uq7hu00tcodph2ofqguz8vborq9gdue1eibok107hlegvh5mmwe68vnn1h9imapo1vh3yspp97gpqm8pg5qnefa7aay3xh54ngajtbygkla0064w5z3fopzejs60smwsf0sixkb3h3ltiva0j8bkgc6z9iqpq7tcbya6zlevbgaplyo3mre894j0ik2pton82ts59kylyo6etkbx1lh08y6a18ow5bu3gqtjz5zcronpibi7tn6yp55x8f2ha79ojzdxhkm2oa3b4xapplasfsske0xp8oq26fgvkejps == \f\s\v\p\b\5\y\u\n\u\u\c\5\m\y\9\y\5\t\d\a\o\d\c\q\i\c\3\e\m\9\h\1\l\k\m\h\y\t\4\r\e\w\1\b\y\e\2\k\6\u\g\7\z\r\j\a\q\i\d\0\k\p\4\8\v\h\1\t\9\7\w\9\t\w\9\k\s\g\9\h\e\8\m\z\7\9\1\8\k\0\n\b\l\8\o\p\2\2\z\f\f\w\g\p\h\i\c\s\7\b\0\7\j\s\i\j\y\2\1\8\5\m\1\8\0\m\u\8\9\a\d\a\x\p\2\g\f\w\0\e\a\7\0\p\o\g\h\6\p\9\t\j\n\m\i\n\w\7\b\o\x\i\b\3\a\o\o\8\y\s\5\u\e\g\w\b\2\1\u\1\g\o\h\6\c\1\g\g\x\f\7\m\j\p\9\f\m\b\t\p\5\m\4\v\h\n\0\m\n\c\z\m\2\9\u\q\7\h\u\0\0\t\c\o\d\p\h\2\o\f\q\g\u\z\8\v\b\o\r\q\9\g\d\u\e\1\e\i\b\o\k\1\0\7\h\l\e\g\v\h\5\m\m\w\e\6\8\v\n\n\1\h\9\i\m\a\p\o\1\v\h\3\y\s\p\p\9\7\g\p\q\m\8\p\g\5\q\n\e\f\a\7\a\a\y\3\x\h\5\4\n\g\a\j\t\b\y\g\k\l\a\0\0\6\4\w\5\z\3\f\o\p\z\e\j\s\6\0\s\m\w\s\f\0\s\i\x\k\b\3\h\3\l\t\i\v\a\0\j\8\b\k\g\c\6\z\9\i\q\p\q\7\t\c\b\y\a\6\z\l\e\v\b\g\a\p\l\y\o\3\m\r\e\8\9\4\j\0\i\k\2\p\t\o\n\8\2\t\s\5\9\k\y\l\y\o\6\e\t\k\b\x\1\l\h\0\8\y\6\a\1\8\o\w\5\b\u\3\g\q\t\j\z\5\z\c\r\o\n\p\i\b\i\7\t\n\6\y\p\5\5\x\8\f\2\h\a\7\9\o\j\z\d\x\h\k\m\2\o\a\3\b\4\x\a\p\p\l\a\s\f\s\s\k\e\0\x\p\8\o\q\2\6\f\g\v\k\e\j\p\s ]] 00:13:47.716 00:13:47.716 real 0m3.820s 00:13:47.716 user 0m2.240s 00:13:47.716 sys 0m1.375s 00:13:47.716 14:32:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:47.716 14:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.716 ************************************ 00:13:47.716 END TEST dd_flags_misc 00:13:47.716 ************************************ 00:13:47.716 14:32:56 -- dd/posix.sh@131 -- # tests_forced_aio 00:13:47.716 14:32:56 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:13:47.716 * Second test run, disabling liburing, forcing AIO 00:13:47.716 14:32:56 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:13:47.716 14:32:56 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:13:47.716 14:32:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:47.716 14:32:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.716 14:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.716 ************************************ 00:13:47.716 START TEST dd_flag_append_forced_aio 00:13:47.716 ************************************ 00:13:47.716 14:32:56 -- common/autotest_common.sh@1111 -- # append 00:13:47.716 14:32:56 -- dd/posix.sh@16 -- # local dump0 00:13:47.716 14:32:56 -- dd/posix.sh@17 -- # local dump1 00:13:47.716 14:32:56 -- dd/posix.sh@19 -- # gen_bytes 32 00:13:47.716 14:32:56 -- 
dd/common.sh@98 -- # xtrace_disable 00:13:47.716 14:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.716 14:32:56 -- dd/posix.sh@19 -- # dump0=375311gy7r410az4wnt583nlmm1jy12f 00:13:47.716 14:32:56 -- dd/posix.sh@20 -- # gen_bytes 32 00:13:47.716 14:32:56 -- dd/common.sh@98 -- # xtrace_disable 00:13:47.716 14:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.716 14:32:56 -- dd/posix.sh@20 -- # dump1=drohwn9b4rlbyi46pj1jnr039snns2ba 00:13:47.716 14:32:56 -- dd/posix.sh@22 -- # printf %s 375311gy7r410az4wnt583nlmm1jy12f 00:13:47.716 14:32:56 -- dd/posix.sh@23 -- # printf %s drohwn9b4rlbyi46pj1jnr039snns2ba 00:13:47.716 14:32:56 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:13:47.716 [2024-04-17 14:32:56.295085] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:47.716 [2024-04-17 14:32:56.295176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62934 ] 00:13:47.976 [2024-04-17 14:32:56.428473] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.976 [2024-04-17 14:32:56.486041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.234  Copying: 32/32 [B] (average 31 kBps) 00:13:48.234 00:13:48.234 ************************************ 00:13:48.234 END TEST dd_flag_append_forced_aio 00:13:48.234 ************************************ 00:13:48.234 14:32:56 -- dd/posix.sh@27 -- # [[ drohwn9b4rlbyi46pj1jnr039snns2ba375311gy7r410az4wnt583nlmm1jy12f == \d\r\o\h\w\n\9\b\4\r\l\b\y\i\4\6\p\j\1\j\n\r\0\3\9\s\n\n\s\2\b\a\3\7\5\3\1\1\g\y\7\r\4\1\0\a\z\4\w\n\t\5\8\3\n\l\m\m\1\j\y\1\2\f ]] 00:13:48.234 00:13:48.234 real 0m0.474s 00:13:48.234 user 0m0.261s 00:13:48.234 sys 0m0.090s 00:13:48.234 14:32:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:48.234 14:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:48.234 14:32:56 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:13:48.234 14:32:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:48.234 14:32:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.234 14:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:48.234 ************************************ 00:13:48.492 START TEST dd_flag_directory_forced_aio 00:13:48.492 ************************************ 00:13:48.492 14:32:56 -- common/autotest_common.sh@1111 -- # directory 00:13:48.492 14:32:56 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:48.492 14:32:56 -- common/autotest_common.sh@638 -- # local es=0 00:13:48.492 14:32:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:48.492 14:32:56 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:48.492 14:32:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:48.492 14:32:56 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
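The dd_flag_append_forced_aio block that completes above seeds dd.dump0 and dd.dump1 with two random 32-character strings, copies dump0 onto dump1 with --oflag=append, and then asserts that dump1 now contains its original string immediately followed by dump0's. The same append semantics, sketched with GNU dd and literal placeholder payloads standing in for the gen_bytes output:

    #!/usr/bin/env bash
    # Sketch only: append semantics as exercised by dd_flag_append_forced_aio.
    set -eu
    dump0=$(mktemp) dump1=$(mktemp)
    a='payload-from-dump0-0123456789abcdef'   # stands in for the first gen_bytes 32 string
    b='existing-dump1-data-0123456789abcde'   # stands in for the second one
    printf %s "$a" > "$dump0"
    printf %s "$b" > "$dump1"
    # oflag=append opens the destination with O_APPEND; conv=notrunc stops GNU dd
    # from truncating the existing contents before writing.
    dd if="$dump0" of="$dump1" oflag=append conv=notrunc status=none
    [[ $(cat "$dump1") == "${b}${a}" ]] && echo "append: destination holds b followed by a"

The --aio flag in the logged invocation corresponds to the "* Second test run, disabling liburing, forcing AIO" banner above; the append behaviour under test is the same in either I/O mode.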
00:13:48.492 14:32:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:48.492 14:32:56 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:48.492 14:32:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:48.492 14:32:56 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:48.492 14:32:56 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:48.492 14:32:56 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:48.492 [2024-04-17 14:32:56.888674] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:48.492 [2024-04-17 14:32:56.888800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62964 ] 00:13:48.492 [2024-04-17 14:32:57.030844] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.492 [2024-04-17 14:32:57.091038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.750 [2024-04-17 14:32:57.139367] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:48.750 [2024-04-17 14:32:57.139427] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:48.750 [2024-04-17 14:32:57.139442] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:48.750 [2024-04-17 14:32:57.207688] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:13:48.750 14:32:57 -- common/autotest_common.sh@641 -- # es=236 00:13:48.750 14:32:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:48.750 14:32:57 -- common/autotest_common.sh@650 -- # es=108 00:13:48.750 14:32:57 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:48.750 14:32:57 -- common/autotest_common.sh@658 -- # es=1 00:13:48.750 14:32:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:48.750 14:32:57 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:48.750 14:32:57 -- common/autotest_common.sh@638 -- # local es=0 00:13:48.750 14:32:57 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:48.750 14:32:57 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:48.750 14:32:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:48.750 14:32:57 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:48.750 14:32:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:48.750 14:32:57 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:48.750 14:32:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:48.750 14:32:57 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:48.750 14:32:57 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:48.750 14:32:57 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:49.008 [2024-04-17 14:32:57.395933] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:49.008 [2024-04-17 14:32:57.396080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62974 ] 00:13:49.008 [2024-04-17 14:32:57.535425] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.267 [2024-04-17 14:32:57.620898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.267 [2024-04-17 14:32:57.676981] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:49.267 [2024-04-17 14:32:57.677058] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:49.267 [2024-04-17 14:32:57.677078] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:49.267 [2024-04-17 14:32:57.750837] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:13:49.267 14:32:57 -- common/autotest_common.sh@641 -- # es=236 00:13:49.267 14:32:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:49.267 14:32:57 -- common/autotest_common.sh@650 -- # es=108 00:13:49.267 ************************************ 00:13:49.267 END TEST dd_flag_directory_forced_aio 00:13:49.267 ************************************ 00:13:49.267 14:32:57 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:49.267 14:32:57 -- common/autotest_common.sh@658 -- # es=1 00:13:49.267 14:32:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:49.267 00:13:49.267 real 0m1.027s 00:13:49.267 user 0m0.621s 00:13:49.267 sys 0m0.195s 00:13:49.267 14:32:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.267 14:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:49.527 14:32:57 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:13:49.527 14:32:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:49.527 14:32:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:49.527 14:32:57 -- common/autotest_common.sh@10 -- # set +x 00:13:49.527 ************************************ 00:13:49.527 START TEST dd_flag_nofollow_forced_aio 00:13:49.527 ************************************ 00:13:49.527 14:32:57 -- common/autotest_common.sh@1111 -- # nofollow 00:13:49.527 14:32:57 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:49.527 14:32:57 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:49.527 14:32:57 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:49.527 14:32:57 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:49.527 14:32:57 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:49.527 14:32:57 -- common/autotest_common.sh@638 -- # local es=0 00:13:49.527 14:32:57 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:49.527 14:32:57 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.527 14:32:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:49.527 14:32:57 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.527 14:32:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:49.527 14:32:57 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.527 14:32:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:49.527 14:32:57 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.527 14:32:57 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:49.527 14:32:57 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:49.527 [2024-04-17 14:32:58.029367] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:49.527 [2024-04-17 14:32:58.029498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63007 ] 00:13:49.789 [2024-04-17 14:32:58.168858] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.789 [2024-04-17 14:32:58.252239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.789 [2024-04-17 14:32:58.311212] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:49.789 [2024-04-17 14:32:58.311296] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:49.789 [2024-04-17 14:32:58.311323] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:49.789 [2024-04-17 14:32:58.389835] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:13:50.047 14:32:58 -- common/autotest_common.sh@641 -- # es=216 00:13:50.047 14:32:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:50.047 14:32:58 -- common/autotest_common.sh@650 -- # es=88 00:13:50.048 14:32:58 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:50.048 14:32:58 -- common/autotest_common.sh@658 -- # es=1 00:13:50.048 14:32:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:50.048 14:32:58 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:50.048 14:32:58 -- common/autotest_common.sh@638 -- # local es=0 00:13:50.048 14:32:58 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:50.048 14:32:58 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:50.048 14:32:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:50.048 14:32:58 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:50.048 14:32:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:50.048 14:32:58 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:50.048 14:32:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:50.048 14:32:58 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:50.048 14:32:58 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:50.048 14:32:58 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:50.048 [2024-04-17 14:32:58.585499] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:50.048 [2024-04-17 14:32:58.585860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63016 ] 00:13:50.306 [2024-04-17 14:32:58.724314] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.306 [2024-04-17 14:32:58.809418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.306 [2024-04-17 14:32:58.869864] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:50.306 [2024-04-17 14:32:58.869936] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:50.306 [2024-04-17 14:32:58.869978] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:50.564 [2024-04-17 14:32:58.949758] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:13:50.564 14:32:59 -- common/autotest_common.sh@641 -- # es=216 00:13:50.564 14:32:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:50.564 14:32:59 -- common/autotest_common.sh@650 -- # es=88 00:13:50.564 14:32:59 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:50.564 14:32:59 -- common/autotest_common.sh@658 -- # es=1 00:13:50.564 14:32:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:50.564 14:32:59 -- dd/posix.sh@46 -- # gen_bytes 512 00:13:50.564 14:32:59 -- dd/common.sh@98 -- # xtrace_disable 00:13:50.564 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:50.564 14:32:59 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:50.564 [2024-04-17 14:32:59.153483] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:50.564 [2024-04-17 14:32:59.153849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63029 ] 00:13:50.822 [2024-04-17 14:32:59.298010] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.822 [2024-04-17 14:32:59.357938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.080  Copying: 512/512 [B] (average 500 kBps) 00:13:51.080 00:13:51.080 14:32:59 -- dd/posix.sh@49 -- # [[ ulr28udcz6z6g213kx570tsa6p1wietppfa6pltbn1ez6gfhlvsdwrfha1pbn67rp5l1kldakzpeslxv7i9798mg3xbt194526gw27h9rj6kquo2359t025mb1hjvqdbvcik9j4hf5j32g6ygsc6t04fmnz7aphzp9lk7g4big53gq4j09seubav9hwzxxf6tvhni7taynkzn4zlmm139xidi2vq8i09amlmqo7qdqn5v2mopbf6xp8m807p337jzsx04o7169gq0zm3h01ycn0tt4gpwyukzc0ya1ptni3dhdlurebsdhcpy42er42pyyu382z8po4xw3p52fcbwcaszv0easu8va2jey2y3xor6zb86a3ci1rql44barp7c90ajrxfl22e20bqb6bxe7iwxtq6l8nkj644s64xn6b1xqi15n7m80t7yvgdu823hr0tq6exoai29we4y6ecy3pi42jnrz555mxlru5j1tm0qhufaruhikwt9uwv4q78 == \u\l\r\2\8\u\d\c\z\6\z\6\g\2\1\3\k\x\5\7\0\t\s\a\6\p\1\w\i\e\t\p\p\f\a\6\p\l\t\b\n\1\e\z\6\g\f\h\l\v\s\d\w\r\f\h\a\1\p\b\n\6\7\r\p\5\l\1\k\l\d\a\k\z\p\e\s\l\x\v\7\i\9\7\9\8\m\g\3\x\b\t\1\9\4\5\2\6\g\w\2\7\h\9\r\j\6\k\q\u\o\2\3\5\9\t\0\2\5\m\b\1\h\j\v\q\d\b\v\c\i\k\9\j\4\h\f\5\j\3\2\g\6\y\g\s\c\6\t\0\4\f\m\n\z\7\a\p\h\z\p\9\l\k\7\g\4\b\i\g\5\3\g\q\4\j\0\9\s\e\u\b\a\v\9\h\w\z\x\x\f\6\t\v\h\n\i\7\t\a\y\n\k\z\n\4\z\l\m\m\1\3\9\x\i\d\i\2\v\q\8\i\0\9\a\m\l\m\q\o\7\q\d\q\n\5\v\2\m\o\p\b\f\6\x\p\8\m\8\0\7\p\3\3\7\j\z\s\x\0\4\o\7\1\6\9\g\q\0\z\m\3\h\0\1\y\c\n\0\t\t\4\g\p\w\y\u\k\z\c\0\y\a\1\p\t\n\i\3\d\h\d\l\u\r\e\b\s\d\h\c\p\y\4\2\e\r\4\2\p\y\y\u\3\8\2\z\8\p\o\4\x\w\3\p\5\2\f\c\b\w\c\a\s\z\v\0\e\a\s\u\8\v\a\2\j\e\y\2\y\3\x\o\r\6\z\b\8\6\a\3\c\i\1\r\q\l\4\4\b\a\r\p\7\c\9\0\a\j\r\x\f\l\2\2\e\2\0\b\q\b\6\b\x\e\7\i\w\x\t\q\6\l\8\n\k\j\6\4\4\s\6\4\x\n\6\b\1\x\q\i\1\5\n\7\m\8\0\t\7\y\v\g\d\u\8\2\3\h\r\0\t\q\6\e\x\o\a\i\2\9\w\e\4\y\6\e\c\y\3\p\i\4\2\j\n\r\z\5\5\5\m\x\l\r\u\5\j\1\t\m\0\q\h\u\f\a\r\u\h\i\k\w\t\9\u\w\v\4\q\7\8 ]] 00:13:51.080 00:13:51.080 real 0m1.644s 00:13:51.080 user 0m0.970s 00:13:51.080 sys 0m0.330s 00:13:51.080 14:32:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:51.080 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:51.080 ************************************ 00:13:51.080 END TEST dd_flag_nofollow_forced_aio 00:13:51.080 ************************************ 00:13:51.080 14:32:59 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:13:51.080 14:32:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:51.080 14:32:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.080 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:51.338 ************************************ 00:13:51.338 START TEST dd_flag_noatime_forced_aio 00:13:51.338 ************************************ 00:13:51.338 14:32:59 -- common/autotest_common.sh@1111 -- # noatime 00:13:51.338 14:32:59 -- dd/posix.sh@53 -- # local atime_if 00:13:51.338 14:32:59 -- dd/posix.sh@54 -- # local atime_of 00:13:51.338 14:32:59 -- dd/posix.sh@58 -- # gen_bytes 512 00:13:51.338 14:32:59 -- dd/common.sh@98 -- # xtrace_disable 00:13:51.338 14:32:59 -- common/autotest_common.sh@10 -- # set +x 00:13:51.338 14:32:59 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:51.338 14:32:59 -- dd/posix.sh@60 -- # atime_if=1713364379 
00:13:51.338 14:32:59 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:51.338 14:32:59 -- dd/posix.sh@61 -- # atime_of=1713364379 00:13:51.339 14:32:59 -- dd/posix.sh@66 -- # sleep 1 00:13:52.272 14:33:00 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:52.272 [2024-04-17 14:33:00.782198] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:52.272 [2024-04-17 14:33:00.782313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63068 ] 00:13:52.531 [2024-04-17 14:33:00.930154] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.531 [2024-04-17 14:33:00.995747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.789  Copying: 512/512 [B] (average 500 kBps) 00:13:52.789 00:13:52.789 14:33:01 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:52.789 14:33:01 -- dd/posix.sh@69 -- # (( atime_if == 1713364379 )) 00:13:52.789 14:33:01 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:52.789 14:33:01 -- dd/posix.sh@70 -- # (( atime_of == 1713364379 )) 00:13:52.789 14:33:01 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:52.789 [2024-04-17 14:33:01.292263] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:52.789 [2024-04-17 14:33:01.292365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63085 ] 00:13:53.047 [2024-04-17 14:33:01.426850] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.047 [2024-04-17 14:33:01.505000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.305  Copying: 512/512 [B] (average 500 kBps) 00:13:53.305 00:13:53.305 14:33:01 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:53.306 ************************************ 00:13:53.306 END TEST dd_flag_noatime_forced_aio 00:13:53.306 ************************************ 00:13:53.306 14:33:01 -- dd/posix.sh@73 -- # (( atime_if < 1713364381 )) 00:13:53.306 00:13:53.306 real 0m2.073s 00:13:53.306 user 0m0.606s 00:13:53.306 sys 0m0.218s 00:13:53.306 14:33:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:53.306 14:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:53.306 14:33:01 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:13:53.306 14:33:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:53.306 14:33:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.306 14:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:53.306 ************************************ 00:13:53.306 START TEST dd_flags_misc_forced_aio 00:13:53.306 ************************************ 00:13:53.306 14:33:01 -- common/autotest_common.sh@1111 -- # io 00:13:53.306 14:33:01 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:13:53.306 14:33:01 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:13:53.306 14:33:01 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:13:53.306 14:33:01 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:13:53.306 14:33:01 -- dd/posix.sh@86 -- # gen_bytes 512 00:13:53.306 14:33:01 -- dd/common.sh@98 -- # xtrace_disable 00:13:53.306 14:33:01 -- common/autotest_common.sh@10 -- # set +x 00:13:53.306 14:33:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:53.306 14:33:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:13:53.564 [2024-04-17 14:33:01.932197] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:53.564 [2024-04-17 14:33:01.932303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63111 ] 00:13:53.564 [2024-04-17 14:33:02.063458] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.564 [2024-04-17 14:33:02.142035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.822  Copying: 512/512 [B] (average 500 kBps) 00:13:53.822 00:13:53.822 14:33:02 -- dd/posix.sh@93 -- # [[ rl8mmx7copxewaqmpwcqmk8dfscpoj62aputmam7zt2jooso9rrq5u9v3dfu4ewug1hlh6ytmjizbtf7bd3r1mnyyyd6r2xdmqei058njzah7fsjranhv0rsgeelnpzrctctlxuux1z4djwlagmb67jjd0hffs4as6ho40ml9lgixek7wxyhgazfqq7f3q6oh1ko7ck7tcu7ddrii4y7o6vzyqcib0djetkdv8ofn6oqsa88syf0yk5d5tdpaixjid04qwc7gwh65gxqbxop0kewj2s6kubps60bbokbg8x7qb6yvxnir3cs033st592n66o578l8wpm24a5qn6it1kz5jnjdzc4wq7006kkbs0vgptg0probl95j7drfr0drwyt8evzj9emcvy2ez270k5ecxh39l0g76ph9ja75a8gz5szcmt5fo2uwmfik6qvdeh54ywxftp05i1wf4bnqwxum60kax76ef1xcvb1s4ikjayrgiv150vpjmt8b5bp == \r\l\8\m\m\x\7\c\o\p\x\e\w\a\q\m\p\w\c\q\m\k\8\d\f\s\c\p\o\j\6\2\a\p\u\t\m\a\m\7\z\t\2\j\o\o\s\o\9\r\r\q\5\u\9\v\3\d\f\u\4\e\w\u\g\1\h\l\h\6\y\t\m\j\i\z\b\t\f\7\b\d\3\r\1\m\n\y\y\y\d\6\r\2\x\d\m\q\e\i\0\5\8\n\j\z\a\h\7\f\s\j\r\a\n\h\v\0\r\s\g\e\e\l\n\p\z\r\c\t\c\t\l\x\u\u\x\1\z\4\d\j\w\l\a\g\m\b\6\7\j\j\d\0\h\f\f\s\4\a\s\6\h\o\4\0\m\l\9\l\g\i\x\e\k\7\w\x\y\h\g\a\z\f\q\q\7\f\3\q\6\o\h\1\k\o\7\c\k\7\t\c\u\7\d\d\r\i\i\4\y\7\o\6\v\z\y\q\c\i\b\0\d\j\e\t\k\d\v\8\o\f\n\6\o\q\s\a\8\8\s\y\f\0\y\k\5\d\5\t\d\p\a\i\x\j\i\d\0\4\q\w\c\7\g\w\h\6\5\g\x\q\b\x\o\p\0\k\e\w\j\2\s\6\k\u\b\p\s\6\0\b\b\o\k\b\g\8\x\7\q\b\6\y\v\x\n\i\r\3\c\s\0\3\3\s\t\5\9\2\n\6\6\o\5\7\8\l\8\w\p\m\2\4\a\5\q\n\6\i\t\1\k\z\5\j\n\j\d\z\c\4\w\q\7\0\0\6\k\k\b\s\0\v\g\p\t\g\0\p\r\o\b\l\9\5\j\7\d\r\f\r\0\d\r\w\y\t\8\e\v\z\j\9\e\m\c\v\y\2\e\z\2\7\0\k\5\e\c\x\h\3\9\l\0\g\7\6\p\h\9\j\a\7\5\a\8\g\z\5\s\z\c\m\t\5\f\o\2\u\w\m\f\i\k\6\q\v\d\e\h\5\4\y\w\x\f\t\p\0\5\i\1\w\f\4\b\n\q\w\x\u\m\6\0\k\a\x\7\6\e\f\1\x\c\v\b\1\s\4\i\k\j\a\y\r\g\i\v\1\5\0\v\p\j\m\t\8\b\5\b\p ]] 00:13:53.822 14:33:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:53.822 14:33:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:13:54.082 [2024-04-17 14:33:02.429144] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:54.082 [2024-04-17 14:33:02.429235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63124 ] 00:13:54.082 [2024-04-17 14:33:02.560447] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.082 [2024-04-17 14:33:02.632590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.350  Copying: 512/512 [B] (average 500 kBps) 00:13:54.350 00:13:54.351 14:33:02 -- dd/posix.sh@93 -- # [[ rl8mmx7copxewaqmpwcqmk8dfscpoj62aputmam7zt2jooso9rrq5u9v3dfu4ewug1hlh6ytmjizbtf7bd3r1mnyyyd6r2xdmqei058njzah7fsjranhv0rsgeelnpzrctctlxuux1z4djwlagmb67jjd0hffs4as6ho40ml9lgixek7wxyhgazfqq7f3q6oh1ko7ck7tcu7ddrii4y7o6vzyqcib0djetkdv8ofn6oqsa88syf0yk5d5tdpaixjid04qwc7gwh65gxqbxop0kewj2s6kubps60bbokbg8x7qb6yvxnir3cs033st592n66o578l8wpm24a5qn6it1kz5jnjdzc4wq7006kkbs0vgptg0probl95j7drfr0drwyt8evzj9emcvy2ez270k5ecxh39l0g76ph9ja75a8gz5szcmt5fo2uwmfik6qvdeh54ywxftp05i1wf4bnqwxum60kax76ef1xcvb1s4ikjayrgiv150vpjmt8b5bp == \r\l\8\m\m\x\7\c\o\p\x\e\w\a\q\m\p\w\c\q\m\k\8\d\f\s\c\p\o\j\6\2\a\p\u\t\m\a\m\7\z\t\2\j\o\o\s\o\9\r\r\q\5\u\9\v\3\d\f\u\4\e\w\u\g\1\h\l\h\6\y\t\m\j\i\z\b\t\f\7\b\d\3\r\1\m\n\y\y\y\d\6\r\2\x\d\m\q\e\i\0\5\8\n\j\z\a\h\7\f\s\j\r\a\n\h\v\0\r\s\g\e\e\l\n\p\z\r\c\t\c\t\l\x\u\u\x\1\z\4\d\j\w\l\a\g\m\b\6\7\j\j\d\0\h\f\f\s\4\a\s\6\h\o\4\0\m\l\9\l\g\i\x\e\k\7\w\x\y\h\g\a\z\f\q\q\7\f\3\q\6\o\h\1\k\o\7\c\k\7\t\c\u\7\d\d\r\i\i\4\y\7\o\6\v\z\y\q\c\i\b\0\d\j\e\t\k\d\v\8\o\f\n\6\o\q\s\a\8\8\s\y\f\0\y\k\5\d\5\t\d\p\a\i\x\j\i\d\0\4\q\w\c\7\g\w\h\6\5\g\x\q\b\x\o\p\0\k\e\w\j\2\s\6\k\u\b\p\s\6\0\b\b\o\k\b\g\8\x\7\q\b\6\y\v\x\n\i\r\3\c\s\0\3\3\s\t\5\9\2\n\6\6\o\5\7\8\l\8\w\p\m\2\4\a\5\q\n\6\i\t\1\k\z\5\j\n\j\d\z\c\4\w\q\7\0\0\6\k\k\b\s\0\v\g\p\t\g\0\p\r\o\b\l\9\5\j\7\d\r\f\r\0\d\r\w\y\t\8\e\v\z\j\9\e\m\c\v\y\2\e\z\2\7\0\k\5\e\c\x\h\3\9\l\0\g\7\6\p\h\9\j\a\7\5\a\8\g\z\5\s\z\c\m\t\5\f\o\2\u\w\m\f\i\k\6\q\v\d\e\h\5\4\y\w\x\f\t\p\0\5\i\1\w\f\4\b\n\q\w\x\u\m\6\0\k\a\x\7\6\e\f\1\x\c\v\b\1\s\4\i\k\j\a\y\r\g\i\v\1\5\0\v\p\j\m\t\8\b\5\b\p ]] 00:13:54.351 14:33:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:54.351 14:33:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:13:54.351 [2024-04-17 14:33:02.938211] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:54.351 [2024-04-17 14:33:02.938304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63126 ] 00:13:54.611 [2024-04-17 14:33:03.072527] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.611 [2024-04-17 14:33:03.138360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.870  Copying: 512/512 [B] (average 250 kBps) 00:13:54.870 00:13:54.870 14:33:03 -- dd/posix.sh@93 -- # [[ rl8mmx7copxewaqmpwcqmk8dfscpoj62aputmam7zt2jooso9rrq5u9v3dfu4ewug1hlh6ytmjizbtf7bd3r1mnyyyd6r2xdmqei058njzah7fsjranhv0rsgeelnpzrctctlxuux1z4djwlagmb67jjd0hffs4as6ho40ml9lgixek7wxyhgazfqq7f3q6oh1ko7ck7tcu7ddrii4y7o6vzyqcib0djetkdv8ofn6oqsa88syf0yk5d5tdpaixjid04qwc7gwh65gxqbxop0kewj2s6kubps60bbokbg8x7qb6yvxnir3cs033st592n66o578l8wpm24a5qn6it1kz5jnjdzc4wq7006kkbs0vgptg0probl95j7drfr0drwyt8evzj9emcvy2ez270k5ecxh39l0g76ph9ja75a8gz5szcmt5fo2uwmfik6qvdeh54ywxftp05i1wf4bnqwxum60kax76ef1xcvb1s4ikjayrgiv150vpjmt8b5bp == \r\l\8\m\m\x\7\c\o\p\x\e\w\a\q\m\p\w\c\q\m\k\8\d\f\s\c\p\o\j\6\2\a\p\u\t\m\a\m\7\z\t\2\j\o\o\s\o\9\r\r\q\5\u\9\v\3\d\f\u\4\e\w\u\g\1\h\l\h\6\y\t\m\j\i\z\b\t\f\7\b\d\3\r\1\m\n\y\y\y\d\6\r\2\x\d\m\q\e\i\0\5\8\n\j\z\a\h\7\f\s\j\r\a\n\h\v\0\r\s\g\e\e\l\n\p\z\r\c\t\c\t\l\x\u\u\x\1\z\4\d\j\w\l\a\g\m\b\6\7\j\j\d\0\h\f\f\s\4\a\s\6\h\o\4\0\m\l\9\l\g\i\x\e\k\7\w\x\y\h\g\a\z\f\q\q\7\f\3\q\6\o\h\1\k\o\7\c\k\7\t\c\u\7\d\d\r\i\i\4\y\7\o\6\v\z\y\q\c\i\b\0\d\j\e\t\k\d\v\8\o\f\n\6\o\q\s\a\8\8\s\y\f\0\y\k\5\d\5\t\d\p\a\i\x\j\i\d\0\4\q\w\c\7\g\w\h\6\5\g\x\q\b\x\o\p\0\k\e\w\j\2\s\6\k\u\b\p\s\6\0\b\b\o\k\b\g\8\x\7\q\b\6\y\v\x\n\i\r\3\c\s\0\3\3\s\t\5\9\2\n\6\6\o\5\7\8\l\8\w\p\m\2\4\a\5\q\n\6\i\t\1\k\z\5\j\n\j\d\z\c\4\w\q\7\0\0\6\k\k\b\s\0\v\g\p\t\g\0\p\r\o\b\l\9\5\j\7\d\r\f\r\0\d\r\w\y\t\8\e\v\z\j\9\e\m\c\v\y\2\e\z\2\7\0\k\5\e\c\x\h\3\9\l\0\g\7\6\p\h\9\j\a\7\5\a\8\g\z\5\s\z\c\m\t\5\f\o\2\u\w\m\f\i\k\6\q\v\d\e\h\5\4\y\w\x\f\t\p\0\5\i\1\w\f\4\b\n\q\w\x\u\m\6\0\k\a\x\7\6\e\f\1\x\c\v\b\1\s\4\i\k\j\a\y\r\g\i\v\1\5\0\v\p\j\m\t\8\b\5\b\p ]] 00:13:54.870 14:33:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:54.870 14:33:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:13:54.870 [2024-04-17 14:33:03.466770] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:54.870 [2024-04-17 14:33:03.466859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63139 ] 00:13:55.128 [2024-04-17 14:33:03.598209] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.128 [2024-04-17 14:33:03.682742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.386  Copying: 512/512 [B] (average 166 kBps) 00:13:55.386 00:13:55.386 14:33:03 -- dd/posix.sh@93 -- # [[ rl8mmx7copxewaqmpwcqmk8dfscpoj62aputmam7zt2jooso9rrq5u9v3dfu4ewug1hlh6ytmjizbtf7bd3r1mnyyyd6r2xdmqei058njzah7fsjranhv0rsgeelnpzrctctlxuux1z4djwlagmb67jjd0hffs4as6ho40ml9lgixek7wxyhgazfqq7f3q6oh1ko7ck7tcu7ddrii4y7o6vzyqcib0djetkdv8ofn6oqsa88syf0yk5d5tdpaixjid04qwc7gwh65gxqbxop0kewj2s6kubps60bbokbg8x7qb6yvxnir3cs033st592n66o578l8wpm24a5qn6it1kz5jnjdzc4wq7006kkbs0vgptg0probl95j7drfr0drwyt8evzj9emcvy2ez270k5ecxh39l0g76ph9ja75a8gz5szcmt5fo2uwmfik6qvdeh54ywxftp05i1wf4bnqwxum60kax76ef1xcvb1s4ikjayrgiv150vpjmt8b5bp == \r\l\8\m\m\x\7\c\o\p\x\e\w\a\q\m\p\w\c\q\m\k\8\d\f\s\c\p\o\j\6\2\a\p\u\t\m\a\m\7\z\t\2\j\o\o\s\o\9\r\r\q\5\u\9\v\3\d\f\u\4\e\w\u\g\1\h\l\h\6\y\t\m\j\i\z\b\t\f\7\b\d\3\r\1\m\n\y\y\y\d\6\r\2\x\d\m\q\e\i\0\5\8\n\j\z\a\h\7\f\s\j\r\a\n\h\v\0\r\s\g\e\e\l\n\p\z\r\c\t\c\t\l\x\u\u\x\1\z\4\d\j\w\l\a\g\m\b\6\7\j\j\d\0\h\f\f\s\4\a\s\6\h\o\4\0\m\l\9\l\g\i\x\e\k\7\w\x\y\h\g\a\z\f\q\q\7\f\3\q\6\o\h\1\k\o\7\c\k\7\t\c\u\7\d\d\r\i\i\4\y\7\o\6\v\z\y\q\c\i\b\0\d\j\e\t\k\d\v\8\o\f\n\6\o\q\s\a\8\8\s\y\f\0\y\k\5\d\5\t\d\p\a\i\x\j\i\d\0\4\q\w\c\7\g\w\h\6\5\g\x\q\b\x\o\p\0\k\e\w\j\2\s\6\k\u\b\p\s\6\0\b\b\o\k\b\g\8\x\7\q\b\6\y\v\x\n\i\r\3\c\s\0\3\3\s\t\5\9\2\n\6\6\o\5\7\8\l\8\w\p\m\2\4\a\5\q\n\6\i\t\1\k\z\5\j\n\j\d\z\c\4\w\q\7\0\0\6\k\k\b\s\0\v\g\p\t\g\0\p\r\o\b\l\9\5\j\7\d\r\f\r\0\d\r\w\y\t\8\e\v\z\j\9\e\m\c\v\y\2\e\z\2\7\0\k\5\e\c\x\h\3\9\l\0\g\7\6\p\h\9\j\a\7\5\a\8\g\z\5\s\z\c\m\t\5\f\o\2\u\w\m\f\i\k\6\q\v\d\e\h\5\4\y\w\x\f\t\p\0\5\i\1\w\f\4\b\n\q\w\x\u\m\6\0\k\a\x\7\6\e\f\1\x\c\v\b\1\s\4\i\k\j\a\y\r\g\i\v\1\5\0\v\p\j\m\t\8\b\5\b\p ]] 00:13:55.386 14:33:03 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:13:55.386 14:33:03 -- dd/posix.sh@86 -- # gen_bytes 512 00:13:55.386 14:33:03 -- dd/common.sh@98 -- # xtrace_disable 00:13:55.386 14:33:03 -- common/autotest_common.sh@10 -- # set +x 00:13:55.386 14:33:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:55.386 14:33:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:13:55.644 [2024-04-17 14:33:04.021838] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:55.644 [2024-04-17 14:33:04.021978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63141 ] 00:13:55.644 [2024-04-17 14:33:04.161117] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.644 [2024-04-17 14:33:04.238472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.160  Copying: 512/512 [B] (average 500 kBps) 00:13:56.160 00:13:56.161 14:33:04 -- dd/posix.sh@93 -- # [[ u8nnp744eerew62bvc68obwmq8uk2swwdvmdt93t38bpctnze8n40z5nug39ooebrkdxw3mjw4o1kivv7g0b7l3po2s4xicrf6m2braoyk951tywkau67kiv6y5uhcwskvnfe03p9qedg9vzpl7odgx6r5zapj62nfxa01wcnp1yde6p7ko5g7ahtely4244dt2vmetx5mf6xgi0pvu714xhtc2v4m2gw3g68t3xct7pbasbulbnj5rg1rfkr8e63gk6n9lhkmub4zx8jfnusjy39y3ir2dk12hwb4xia6gf7ry6jn5ngfrguuoblowg9ri1nme4k57v51tlwi9ank412kj0457hdqmxka6gdnotdi6clg6z6ygazzfo5jw1eb73drxe2ju53u946vjtcdsvc58j31zyc8w5ixm26wxcboqefw1cs464wyafz3l9oepnba3frtwej813uyji4vzrum2jyevlpx4bbglezjzc6q2ucjp85ftxtt3wiyy3 == \u\8\n\n\p\7\4\4\e\e\r\e\w\6\2\b\v\c\6\8\o\b\w\m\q\8\u\k\2\s\w\w\d\v\m\d\t\9\3\t\3\8\b\p\c\t\n\z\e\8\n\4\0\z\5\n\u\g\3\9\o\o\e\b\r\k\d\x\w\3\m\j\w\4\o\1\k\i\v\v\7\g\0\b\7\l\3\p\o\2\s\4\x\i\c\r\f\6\m\2\b\r\a\o\y\k\9\5\1\t\y\w\k\a\u\6\7\k\i\v\6\y\5\u\h\c\w\s\k\v\n\f\e\0\3\p\9\q\e\d\g\9\v\z\p\l\7\o\d\g\x\6\r\5\z\a\p\j\6\2\n\f\x\a\0\1\w\c\n\p\1\y\d\e\6\p\7\k\o\5\g\7\a\h\t\e\l\y\4\2\4\4\d\t\2\v\m\e\t\x\5\m\f\6\x\g\i\0\p\v\u\7\1\4\x\h\t\c\2\v\4\m\2\g\w\3\g\6\8\t\3\x\c\t\7\p\b\a\s\b\u\l\b\n\j\5\r\g\1\r\f\k\r\8\e\6\3\g\k\6\n\9\l\h\k\m\u\b\4\z\x\8\j\f\n\u\s\j\y\3\9\y\3\i\r\2\d\k\1\2\h\w\b\4\x\i\a\6\g\f\7\r\y\6\j\n\5\n\g\f\r\g\u\u\o\b\l\o\w\g\9\r\i\1\n\m\e\4\k\5\7\v\5\1\t\l\w\i\9\a\n\k\4\1\2\k\j\0\4\5\7\h\d\q\m\x\k\a\6\g\d\n\o\t\d\i\6\c\l\g\6\z\6\y\g\a\z\z\f\o\5\j\w\1\e\b\7\3\d\r\x\e\2\j\u\5\3\u\9\4\6\v\j\t\c\d\s\v\c\5\8\j\3\1\z\y\c\8\w\5\i\x\m\2\6\w\x\c\b\o\q\e\f\w\1\c\s\4\6\4\w\y\a\f\z\3\l\9\o\e\p\n\b\a\3\f\r\t\w\e\j\8\1\3\u\y\j\i\4\v\z\r\u\m\2\j\y\e\v\l\p\x\4\b\b\g\l\e\z\j\z\c\6\q\2\u\c\j\p\8\5\f\t\x\t\t\3\w\i\y\y\3 ]] 00:13:56.161 14:33:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:56.161 14:33:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:13:56.161 [2024-04-17 14:33:04.605733] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:56.161 [2024-04-17 14:33:04.605872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63154 ] 00:13:56.161 [2024-04-17 14:33:04.748387] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.418 [2024-04-17 14:33:04.853777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.677  Copying: 512/512 [B] (average 500 kBps) 00:13:56.677 00:13:56.677 14:33:05 -- dd/posix.sh@93 -- # [[ u8nnp744eerew62bvc68obwmq8uk2swwdvmdt93t38bpctnze8n40z5nug39ooebrkdxw3mjw4o1kivv7g0b7l3po2s4xicrf6m2braoyk951tywkau67kiv6y5uhcwskvnfe03p9qedg9vzpl7odgx6r5zapj62nfxa01wcnp1yde6p7ko5g7ahtely4244dt2vmetx5mf6xgi0pvu714xhtc2v4m2gw3g68t3xct7pbasbulbnj5rg1rfkr8e63gk6n9lhkmub4zx8jfnusjy39y3ir2dk12hwb4xia6gf7ry6jn5ngfrguuoblowg9ri1nme4k57v51tlwi9ank412kj0457hdqmxka6gdnotdi6clg6z6ygazzfo5jw1eb73drxe2ju53u946vjtcdsvc58j31zyc8w5ixm26wxcboqefw1cs464wyafz3l9oepnba3frtwej813uyji4vzrum2jyevlpx4bbglezjzc6q2ucjp85ftxtt3wiyy3 == \u\8\n\n\p\7\4\4\e\e\r\e\w\6\2\b\v\c\6\8\o\b\w\m\q\8\u\k\2\s\w\w\d\v\m\d\t\9\3\t\3\8\b\p\c\t\n\z\e\8\n\4\0\z\5\n\u\g\3\9\o\o\e\b\r\k\d\x\w\3\m\j\w\4\o\1\k\i\v\v\7\g\0\b\7\l\3\p\o\2\s\4\x\i\c\r\f\6\m\2\b\r\a\o\y\k\9\5\1\t\y\w\k\a\u\6\7\k\i\v\6\y\5\u\h\c\w\s\k\v\n\f\e\0\3\p\9\q\e\d\g\9\v\z\p\l\7\o\d\g\x\6\r\5\z\a\p\j\6\2\n\f\x\a\0\1\w\c\n\p\1\y\d\e\6\p\7\k\o\5\g\7\a\h\t\e\l\y\4\2\4\4\d\t\2\v\m\e\t\x\5\m\f\6\x\g\i\0\p\v\u\7\1\4\x\h\t\c\2\v\4\m\2\g\w\3\g\6\8\t\3\x\c\t\7\p\b\a\s\b\u\l\b\n\j\5\r\g\1\r\f\k\r\8\e\6\3\g\k\6\n\9\l\h\k\m\u\b\4\z\x\8\j\f\n\u\s\j\y\3\9\y\3\i\r\2\d\k\1\2\h\w\b\4\x\i\a\6\g\f\7\r\y\6\j\n\5\n\g\f\r\g\u\u\o\b\l\o\w\g\9\r\i\1\n\m\e\4\k\5\7\v\5\1\t\l\w\i\9\a\n\k\4\1\2\k\j\0\4\5\7\h\d\q\m\x\k\a\6\g\d\n\o\t\d\i\6\c\l\g\6\z\6\y\g\a\z\z\f\o\5\j\w\1\e\b\7\3\d\r\x\e\2\j\u\5\3\u\9\4\6\v\j\t\c\d\s\v\c\5\8\j\3\1\z\y\c\8\w\5\i\x\m\2\6\w\x\c\b\o\q\e\f\w\1\c\s\4\6\4\w\y\a\f\z\3\l\9\o\e\p\n\b\a\3\f\r\t\w\e\j\8\1\3\u\y\j\i\4\v\z\r\u\m\2\j\y\e\v\l\p\x\4\b\b\g\l\e\z\j\z\c\6\q\2\u\c\j\p\8\5\f\t\x\t\t\3\w\i\y\y\3 ]] 00:13:56.677 14:33:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:56.677 14:33:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:13:56.677 [2024-04-17 14:33:05.179617] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:56.677 [2024-04-17 14:33:05.179760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63162 ] 00:13:56.935 [2024-04-17 14:33:05.319067] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.935 [2024-04-17 14:33:05.400545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.194  Copying: 512/512 [B] (average 250 kBps) 00:13:57.194 00:13:57.194 14:33:05 -- dd/posix.sh@93 -- # [[ u8nnp744eerew62bvc68obwmq8uk2swwdvmdt93t38bpctnze8n40z5nug39ooebrkdxw3mjw4o1kivv7g0b7l3po2s4xicrf6m2braoyk951tywkau67kiv6y5uhcwskvnfe03p9qedg9vzpl7odgx6r5zapj62nfxa01wcnp1yde6p7ko5g7ahtely4244dt2vmetx5mf6xgi0pvu714xhtc2v4m2gw3g68t3xct7pbasbulbnj5rg1rfkr8e63gk6n9lhkmub4zx8jfnusjy39y3ir2dk12hwb4xia6gf7ry6jn5ngfrguuoblowg9ri1nme4k57v51tlwi9ank412kj0457hdqmxka6gdnotdi6clg6z6ygazzfo5jw1eb73drxe2ju53u946vjtcdsvc58j31zyc8w5ixm26wxcboqefw1cs464wyafz3l9oepnba3frtwej813uyji4vzrum2jyevlpx4bbglezjzc6q2ucjp85ftxtt3wiyy3 == \u\8\n\n\p\7\4\4\e\e\r\e\w\6\2\b\v\c\6\8\o\b\w\m\q\8\u\k\2\s\w\w\d\v\m\d\t\9\3\t\3\8\b\p\c\t\n\z\e\8\n\4\0\z\5\n\u\g\3\9\o\o\e\b\r\k\d\x\w\3\m\j\w\4\o\1\k\i\v\v\7\g\0\b\7\l\3\p\o\2\s\4\x\i\c\r\f\6\m\2\b\r\a\o\y\k\9\5\1\t\y\w\k\a\u\6\7\k\i\v\6\y\5\u\h\c\w\s\k\v\n\f\e\0\3\p\9\q\e\d\g\9\v\z\p\l\7\o\d\g\x\6\r\5\z\a\p\j\6\2\n\f\x\a\0\1\w\c\n\p\1\y\d\e\6\p\7\k\o\5\g\7\a\h\t\e\l\y\4\2\4\4\d\t\2\v\m\e\t\x\5\m\f\6\x\g\i\0\p\v\u\7\1\4\x\h\t\c\2\v\4\m\2\g\w\3\g\6\8\t\3\x\c\t\7\p\b\a\s\b\u\l\b\n\j\5\r\g\1\r\f\k\r\8\e\6\3\g\k\6\n\9\l\h\k\m\u\b\4\z\x\8\j\f\n\u\s\j\y\3\9\y\3\i\r\2\d\k\1\2\h\w\b\4\x\i\a\6\g\f\7\r\y\6\j\n\5\n\g\f\r\g\u\u\o\b\l\o\w\g\9\r\i\1\n\m\e\4\k\5\7\v\5\1\t\l\w\i\9\a\n\k\4\1\2\k\j\0\4\5\7\h\d\q\m\x\k\a\6\g\d\n\o\t\d\i\6\c\l\g\6\z\6\y\g\a\z\z\f\o\5\j\w\1\e\b\7\3\d\r\x\e\2\j\u\5\3\u\9\4\6\v\j\t\c\d\s\v\c\5\8\j\3\1\z\y\c\8\w\5\i\x\m\2\6\w\x\c\b\o\q\e\f\w\1\c\s\4\6\4\w\y\a\f\z\3\l\9\o\e\p\n\b\a\3\f\r\t\w\e\j\8\1\3\u\y\j\i\4\v\z\r\u\m\2\j\y\e\v\l\p\x\4\b\b\g\l\e\z\j\z\c\6\q\2\u\c\j\p\8\5\f\t\x\t\t\3\w\i\y\y\3 ]] 00:13:57.194 14:33:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:57.194 14:33:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:13:57.194 [2024-04-17 14:33:05.698531] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:13:57.194 [2024-04-17 14:33:05.698636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63169 ] 00:13:57.452 [2024-04-17 14:33:05.830538] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.452 [2024-04-17 14:33:05.889871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.711  Copying: 512/512 [B] (average 250 kBps) 00:13:57.711 00:13:57.711 14:33:06 -- dd/posix.sh@93 -- # [[ u8nnp744eerew62bvc68obwmq8uk2swwdvmdt93t38bpctnze8n40z5nug39ooebrkdxw3mjw4o1kivv7g0b7l3po2s4xicrf6m2braoyk951tywkau67kiv6y5uhcwskvnfe03p9qedg9vzpl7odgx6r5zapj62nfxa01wcnp1yde6p7ko5g7ahtely4244dt2vmetx5mf6xgi0pvu714xhtc2v4m2gw3g68t3xct7pbasbulbnj5rg1rfkr8e63gk6n9lhkmub4zx8jfnusjy39y3ir2dk12hwb4xia6gf7ry6jn5ngfrguuoblowg9ri1nme4k57v51tlwi9ank412kj0457hdqmxka6gdnotdi6clg6z6ygazzfo5jw1eb73drxe2ju53u946vjtcdsvc58j31zyc8w5ixm26wxcboqefw1cs464wyafz3l9oepnba3frtwej813uyji4vzrum2jyevlpx4bbglezjzc6q2ucjp85ftxtt3wiyy3 == \u\8\n\n\p\7\4\4\e\e\r\e\w\6\2\b\v\c\6\8\o\b\w\m\q\8\u\k\2\s\w\w\d\v\m\d\t\9\3\t\3\8\b\p\c\t\n\z\e\8\n\4\0\z\5\n\u\g\3\9\o\o\e\b\r\k\d\x\w\3\m\j\w\4\o\1\k\i\v\v\7\g\0\b\7\l\3\p\o\2\s\4\x\i\c\r\f\6\m\2\b\r\a\o\y\k\9\5\1\t\y\w\k\a\u\6\7\k\i\v\6\y\5\u\h\c\w\s\k\v\n\f\e\0\3\p\9\q\e\d\g\9\v\z\p\l\7\o\d\g\x\6\r\5\z\a\p\j\6\2\n\f\x\a\0\1\w\c\n\p\1\y\d\e\6\p\7\k\o\5\g\7\a\h\t\e\l\y\4\2\4\4\d\t\2\v\m\e\t\x\5\m\f\6\x\g\i\0\p\v\u\7\1\4\x\h\t\c\2\v\4\m\2\g\w\3\g\6\8\t\3\x\c\t\7\p\b\a\s\b\u\l\b\n\j\5\r\g\1\r\f\k\r\8\e\6\3\g\k\6\n\9\l\h\k\m\u\b\4\z\x\8\j\f\n\u\s\j\y\3\9\y\3\i\r\2\d\k\1\2\h\w\b\4\x\i\a\6\g\f\7\r\y\6\j\n\5\n\g\f\r\g\u\u\o\b\l\o\w\g\9\r\i\1\n\m\e\4\k\5\7\v\5\1\t\l\w\i\9\a\n\k\4\1\2\k\j\0\4\5\7\h\d\q\m\x\k\a\6\g\d\n\o\t\d\i\6\c\l\g\6\z\6\y\g\a\z\z\f\o\5\j\w\1\e\b\7\3\d\r\x\e\2\j\u\5\3\u\9\4\6\v\j\t\c\d\s\v\c\5\8\j\3\1\z\y\c\8\w\5\i\x\m\2\6\w\x\c\b\o\q\e\f\w\1\c\s\4\6\4\w\y\a\f\z\3\l\9\o\e\p\n\b\a\3\f\r\t\w\e\j\8\1\3\u\y\j\i\4\v\z\r\u\m\2\j\y\e\v\l\p\x\4\b\b\g\l\e\z\j\z\c\6\q\2\u\c\j\p\8\5\f\t\x\t\t\3\w\i\y\y\3 ]] 00:13:57.711 00:13:57.711 real 0m4.270s 00:13:57.711 user 0m2.489s 00:13:57.711 sys 0m0.786s 00:13:57.711 14:33:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.711 14:33:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.711 ************************************ 00:13:57.711 END TEST dd_flags_misc_forced_aio 00:13:57.711 ************************************ 00:13:57.711 14:33:06 -- dd/posix.sh@1 -- # cleanup 00:13:57.711 14:33:06 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:57.711 14:33:06 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:57.711 ************************************ 00:13:57.711 END TEST spdk_dd_posix 00:13:57.711 ************************************ 00:13:57.711 00:13:57.711 real 0m19.411s 00:13:57.711 user 0m9.916s 00:13:57.711 sys 0m4.721s 00:13:57.711 14:33:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.711 14:33:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.711 14:33:06 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:13:57.711 14:33:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:57.711 14:33:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.711 14:33:06 -- 
common/autotest_common.sh@10 -- # set +x 00:13:57.711 ************************************ 00:13:57.711 START TEST spdk_dd_malloc 00:13:57.711 ************************************ 00:13:57.711 14:33:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:13:57.969 * Looking for test storage... 00:13:57.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:57.969 14:33:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:57.969 14:33:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.969 14:33:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.969 14:33:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.969 14:33:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.969 14:33:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.969 14:33:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.969 14:33:06 -- paths/export.sh@5 -- # export PATH 00:13:57.969 14:33:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.969 14:33:06 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:13:57.969 14:33:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:57.969 14:33:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.969 14:33:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.969 ************************************ 00:13:57.969 START TEST dd_malloc_copy 00:13:57.969 
************************************ 00:13:57.969 14:33:06 -- common/autotest_common.sh@1111 -- # malloc_copy 00:13:57.969 14:33:06 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:13:57.969 14:33:06 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:13:57.969 14:33:06 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:13:57.969 14:33:06 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:13:57.969 14:33:06 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:13:57.969 14:33:06 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:13:57.969 14:33:06 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:13:57.969 14:33:06 -- dd/malloc.sh@28 -- # gen_conf 00:13:57.969 14:33:06 -- dd/common.sh@31 -- # xtrace_disable 00:13:57.969 14:33:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.969 [2024-04-17 14:33:06.485400] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:13:57.969 [2024-04-17 14:33:06.485527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63252 ] 00:13:57.969 { 00:13:57.969 "subsystems": [ 00:13:57.969 { 00:13:57.969 "subsystem": "bdev", 00:13:57.969 "config": [ 00:13:57.969 { 00:13:57.969 "params": { 00:13:57.969 "block_size": 512, 00:13:57.969 "num_blocks": 1048576, 00:13:57.969 "name": "malloc0" 00:13:57.969 }, 00:13:57.969 "method": "bdev_malloc_create" 00:13:57.969 }, 00:13:57.969 { 00:13:57.969 "params": { 00:13:57.969 "block_size": 512, 00:13:57.969 "num_blocks": 1048576, 00:13:57.969 "name": "malloc1" 00:13:57.969 }, 00:13:57.969 "method": "bdev_malloc_create" 00:13:57.969 }, 00:13:57.970 { 00:13:57.970 "method": "bdev_wait_for_examine" 00:13:57.970 } 00:13:57.970 ] 00:13:57.970 } 00:13:57.970 ] 00:13:57.970 } 00:13:58.229 [2024-04-17 14:33:06.623559] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.229 [2024-04-17 14:33:06.704513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.724  Copying: 181/512 [MB] (181 MBps) Copying: 374/512 [MB] (192 MBps) Copying: 512/512 [MB] (average 186 MBps) 00:14:01.724 00:14:01.724 14:33:10 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:14:01.724 14:33:10 -- dd/malloc.sh@33 -- # gen_conf 00:14:01.724 14:33:10 -- dd/common.sh@31 -- # xtrace_disable 00:14:01.724 14:33:10 -- common/autotest_common.sh@10 -- # set +x 00:14:01.724 { 00:14:01.724 "subsystems": [ 00:14:01.724 { 00:14:01.724 "subsystem": "bdev", 00:14:01.724 "config": [ 00:14:01.724 { 00:14:01.724 "params": { 00:14:01.724 "block_size": 512, 00:14:01.724 "num_blocks": 1048576, 00:14:01.724 "name": "malloc0" 00:14:01.724 }, 00:14:01.724 "method": "bdev_malloc_create" 00:14:01.724 }, 00:14:01.724 { 00:14:01.724 "params": { 00:14:01.724 "block_size": 512, 00:14:01.724 "num_blocks": 1048576, 00:14:01.724 "name": "malloc1" 00:14:01.724 }, 00:14:01.724 "method": "bdev_malloc_create" 00:14:01.724 }, 00:14:01.724 { 00:14:01.724 "method": "bdev_wait_for_examine" 00:14:01.724 } 00:14:01.724 ] 00:14:01.724 } 00:14:01.724 ] 00:14:01.724 } 00:14:01.724 [2024-04-17 14:33:10.191674] 
Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:01.724 [2024-04-17 14:33:10.191813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63294 ] 00:14:01.981 [2024-04-17 14:33:10.332005] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.982 [2024-04-17 14:33:10.417804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.521  Copying: 172/512 [MB] (172 MBps) Copying: 348/512 [MB] (176 MBps) Copying: 512/512 [MB] (average 176 MBps) 00:14:05.521 00:14:05.521 00:14:05.521 real 0m7.555s 00:14:05.521 user 0m6.780s 00:14:05.521 sys 0m0.579s 00:14:05.521 14:33:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:05.521 ************************************ 00:14:05.521 END TEST dd_malloc_copy 00:14:05.521 ************************************ 00:14:05.521 14:33:13 -- common/autotest_common.sh@10 -- # set +x 00:14:05.521 00:14:05.521 real 0m7.742s 00:14:05.521 user 0m6.859s 00:14:05.521 sys 0m0.674s 00:14:05.521 14:33:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:05.521 ************************************ 00:14:05.521 END TEST spdk_dd_malloc 00:14:05.521 ************************************ 00:14:05.521 14:33:14 -- common/autotest_common.sh@10 -- # set +x 00:14:05.521 14:33:14 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:14:05.521 14:33:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:05.521 14:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.521 14:33:14 -- common/autotest_common.sh@10 -- # set +x 00:14:05.780 ************************************ 00:14:05.780 START TEST spdk_dd_bdev_to_bdev 00:14:05.780 ************************************ 00:14:05.780 14:33:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:14:05.780 * Looking for test storage... 
00:14:05.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:05.780 14:33:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.780 14:33:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.780 14:33:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.780 14:33:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.780 14:33:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.780 14:33:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.780 14:33:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.780 14:33:14 -- paths/export.sh@5 -- # export PATH 00:14:05.780 14:33:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:14:05.780 14:33:14 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:14:05.780 14:33:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:05.780 14:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.780 14:33:14 -- common/autotest_common.sh@10 -- # set +x 00:14:05.780 ************************************ 00:14:05.780 START TEST dd_inflate_file 00:14:05.780 ************************************ 00:14:05.780 14:33:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:14:05.780 [2024-04-17 14:33:14.331842] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:05.780 [2024-04-17 14:33:14.332000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63420 ] 00:14:06.039 [2024-04-17 14:33:14.474001] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.039 [2024-04-17 14:33:14.538511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.297  Copying: 64/64 [MB] (average 1422 MBps) 00:14:06.297 00:14:06.297 00:14:06.297 real 0m0.589s 00:14:06.297 user 0m0.374s 00:14:06.297 sys 0m0.255s 00:14:06.297 14:33:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:06.297 14:33:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.297 ************************************ 00:14:06.297 END TEST dd_inflate_file 00:14:06.297 ************************************ 00:14:06.297 14:33:14 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:14:06.555 14:33:14 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:14:06.555 14:33:14 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:14:06.555 14:33:14 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:14:06.555 14:33:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:06.555 14:33:14 -- dd/common.sh@31 -- # xtrace_disable 00:14:06.555 14:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.555 14:33:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.555 14:33:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.555 ************************************ 00:14:06.555 START TEST dd_copy_to_out_bdev 
00:14:06.555 ************************************ 00:14:06.555 14:33:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:14:06.555 { 00:14:06.555 "subsystems": [ 00:14:06.555 { 00:14:06.555 "subsystem": "bdev", 00:14:06.555 "config": [ 00:14:06.555 { 00:14:06.555 "params": { 00:14:06.555 "trtype": "pcie", 00:14:06.555 "traddr": "0000:00:10.0", 00:14:06.555 "name": "Nvme0" 00:14:06.555 }, 00:14:06.555 "method": "bdev_nvme_attach_controller" 00:14:06.555 }, 00:14:06.555 { 00:14:06.555 "params": { 00:14:06.555 "trtype": "pcie", 00:14:06.555 "traddr": "0000:00:11.0", 00:14:06.555 "name": "Nvme1" 00:14:06.555 }, 00:14:06.555 "method": "bdev_nvme_attach_controller" 00:14:06.555 }, 00:14:06.555 { 00:14:06.555 "method": "bdev_wait_for_examine" 00:14:06.555 } 00:14:06.555 ] 00:14:06.555 } 00:14:06.555 ] 00:14:06.555 } 00:14:06.555 [2024-04-17 14:33:15.024203] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:06.555 [2024-04-17 14:33:15.024348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63462 ] 00:14:06.814 [2024-04-17 14:33:15.163193] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.814 [2024-04-17 14:33:15.248548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.188  Copying: 64/64 [MB] (average 65 MBps) 00:14:08.188 00:14:08.188 00:14:08.188 real 0m1.718s 00:14:08.188 user 0m1.503s 00:14:08.188 sys 0m1.283s 00:14:08.188 14:33:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:08.188 14:33:16 -- common/autotest_common.sh@10 -- # set +x 00:14:08.188 ************************************ 00:14:08.188 END TEST dd_copy_to_out_bdev 00:14:08.188 ************************************ 00:14:08.188 14:33:16 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:14:08.188 14:33:16 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:14:08.188 14:33:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:08.188 14:33:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:08.188 14:33:16 -- common/autotest_common.sh@10 -- # set +x 00:14:08.189 ************************************ 00:14:08.189 START TEST dd_offset_magic 00:14:08.189 ************************************ 00:14:08.189 14:33:16 -- common/autotest_common.sh@1111 -- # offset_magic 00:14:08.189 14:33:16 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:14:08.189 14:33:16 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:14:08.189 14:33:16 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:14:08.189 14:33:16 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:14:08.189 14:33:16 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:14:08.189 14:33:16 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:14:08.189 14:33:16 -- dd/common.sh@31 -- # xtrace_disable 00:14:08.447 14:33:16 -- common/autotest_common.sh@10 -- # set +x 00:14:08.447 [2024-04-17 14:33:16.849228] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:08.447 [2024-04-17 14:33:16.849365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63505 ] 00:14:08.447 { 00:14:08.447 "subsystems": [ 00:14:08.447 { 00:14:08.447 "subsystem": "bdev", 00:14:08.447 "config": [ 00:14:08.447 { 00:14:08.447 "params": { 00:14:08.447 "trtype": "pcie", 00:14:08.447 "traddr": "0000:00:10.0", 00:14:08.447 "name": "Nvme0" 00:14:08.447 }, 00:14:08.447 "method": "bdev_nvme_attach_controller" 00:14:08.447 }, 00:14:08.447 { 00:14:08.447 "params": { 00:14:08.447 "trtype": "pcie", 00:14:08.447 "traddr": "0000:00:11.0", 00:14:08.447 "name": "Nvme1" 00:14:08.447 }, 00:14:08.447 "method": "bdev_nvme_attach_controller" 00:14:08.447 }, 00:14:08.447 { 00:14:08.447 "method": "bdev_wait_for_examine" 00:14:08.447 } 00:14:08.447 ] 00:14:08.447 } 00:14:08.447 ] 00:14:08.447 } 00:14:08.447 [2024-04-17 14:33:16.988492] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.755 [2024-04-17 14:33:17.060317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.019  Copying: 65/65 [MB] (average 1300 MBps) 00:14:09.019 00:14:09.019 14:33:17 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:14:09.019 14:33:17 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:14:09.019 14:33:17 -- dd/common.sh@31 -- # xtrace_disable 00:14:09.019 14:33:17 -- common/autotest_common.sh@10 -- # set +x 00:14:09.019 { 00:14:09.019 "subsystems": [ 00:14:09.019 { 00:14:09.019 "subsystem": "bdev", 00:14:09.020 "config": [ 00:14:09.020 { 00:14:09.020 "params": { 00:14:09.020 "trtype": "pcie", 00:14:09.020 "traddr": "0000:00:10.0", 00:14:09.020 "name": "Nvme0" 00:14:09.020 }, 00:14:09.020 "method": "bdev_nvme_attach_controller" 00:14:09.020 }, 00:14:09.020 { 00:14:09.020 "params": { 00:14:09.020 "trtype": "pcie", 00:14:09.020 "traddr": "0000:00:11.0", 00:14:09.020 "name": "Nvme1" 00:14:09.020 }, 00:14:09.020 "method": "bdev_nvme_attach_controller" 00:14:09.020 }, 00:14:09.020 { 00:14:09.020 "method": "bdev_wait_for_examine" 00:14:09.020 } 00:14:09.020 ] 00:14:09.020 } 00:14:09.020 ] 00:14:09.020 } 00:14:09.020 [2024-04-17 14:33:17.585605] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:09.020 [2024-04-17 14:33:17.585736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63525 ] 00:14:09.284 [2024-04-17 14:33:17.725212] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.284 [2024-04-17 14:33:17.802755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.801  Copying: 1024/1024 [kB] (average 1000 MBps) 00:14:09.801 00:14:09.801 14:33:18 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:14:09.801 14:33:18 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:14:09.801 14:33:18 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:14:09.801 14:33:18 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:14:09.801 14:33:18 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:14:09.801 14:33:18 -- dd/common.sh@31 -- # xtrace_disable 00:14:09.801 14:33:18 -- common/autotest_common.sh@10 -- # set +x 00:14:09.801 { 00:14:09.801 "subsystems": [ 00:14:09.801 { 00:14:09.801 "subsystem": "bdev", 00:14:09.801 "config": [ 00:14:09.801 { 00:14:09.801 "params": { 00:14:09.801 "trtype": "pcie", 00:14:09.801 "traddr": "0000:00:10.0", 00:14:09.801 "name": "Nvme0" 00:14:09.801 }, 00:14:09.801 "method": "bdev_nvme_attach_controller" 00:14:09.801 }, 00:14:09.801 { 00:14:09.801 "params": { 00:14:09.801 "trtype": "pcie", 00:14:09.801 "traddr": "0000:00:11.0", 00:14:09.801 "name": "Nvme1" 00:14:09.801 }, 00:14:09.801 "method": "bdev_nvme_attach_controller" 00:14:09.801 }, 00:14:09.801 { 00:14:09.801 "method": "bdev_wait_for_examine" 00:14:09.801 } 00:14:09.801 ] 00:14:09.801 } 00:14:09.801 ] 00:14:09.801 } 00:14:09.801 [2024-04-17 14:33:18.228616] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:09.801 [2024-04-17 14:33:18.228741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63541 ] 00:14:09.801 [2024-04-17 14:33:18.371856] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.059 [2024-04-17 14:33:18.430579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.317  Copying: 65/65 [MB] (average 1585 MBps) 00:14:10.317 00:14:10.317 14:33:18 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:14:10.317 14:33:18 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:14:10.317 14:33:18 -- dd/common.sh@31 -- # xtrace_disable 00:14:10.317 14:33:18 -- common/autotest_common.sh@10 -- # set +x 00:14:10.575 { 00:14:10.575 "subsystems": [ 00:14:10.575 { 00:14:10.575 "subsystem": "bdev", 00:14:10.575 "config": [ 00:14:10.575 { 00:14:10.575 "params": { 00:14:10.575 "trtype": "pcie", 00:14:10.575 "traddr": "0000:00:10.0", 00:14:10.575 "name": "Nvme0" 00:14:10.575 }, 00:14:10.575 "method": "bdev_nvme_attach_controller" 00:14:10.575 }, 00:14:10.575 { 00:14:10.575 "params": { 00:14:10.575 "trtype": "pcie", 00:14:10.575 "traddr": "0000:00:11.0", 00:14:10.575 "name": "Nvme1" 00:14:10.575 }, 00:14:10.575 "method": "bdev_nvme_attach_controller" 00:14:10.575 }, 00:14:10.575 { 00:14:10.575 "method": "bdev_wait_for_examine" 00:14:10.575 } 00:14:10.575 ] 00:14:10.575 } 00:14:10.575 ] 00:14:10.575 } 00:14:10.575 [2024-04-17 14:33:18.941407] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:10.575 [2024-04-17 14:33:18.941535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63556 ] 00:14:10.575 [2024-04-17 14:33:19.106897] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.834 [2024-04-17 14:33:19.189840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.093  Copying: 1024/1024 [kB] (average 500 MBps) 00:14:11.093 00:14:11.093 14:33:19 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:14:11.093 14:33:19 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:14:11.093 00:14:11.093 real 0m2.783s 00:14:11.093 user 0m2.088s 00:14:11.093 sys 0m0.655s 00:14:11.093 14:33:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:11.093 14:33:19 -- common/autotest_common.sh@10 -- # set +x 00:14:11.093 ************************************ 00:14:11.093 END TEST dd_offset_magic 00:14:11.093 ************************************ 00:14:11.093 14:33:19 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:14:11.093 14:33:19 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:14:11.093 14:33:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:11.093 14:33:19 -- dd/common.sh@11 -- # local nvme_ref= 00:14:11.093 14:33:19 -- dd/common.sh@12 -- # local size=4194330 00:14:11.093 14:33:19 -- dd/common.sh@14 -- # local bs=1048576 00:14:11.093 14:33:19 -- dd/common.sh@15 -- # local count=5 00:14:11.094 14:33:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:14:11.094 14:33:19 -- dd/common.sh@18 -- # gen_conf 00:14:11.094 14:33:19 -- dd/common.sh@31 -- # xtrace_disable 00:14:11.094 14:33:19 -- common/autotest_common.sh@10 -- # set +x 00:14:11.094 [2024-04-17 14:33:19.656436] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:11.094 [2024-04-17 14:33:19.656520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63593 ] 00:14:11.094 { 00:14:11.094 "subsystems": [ 00:14:11.094 { 00:14:11.094 "subsystem": "bdev", 00:14:11.094 "config": [ 00:14:11.094 { 00:14:11.094 "params": { 00:14:11.094 "trtype": "pcie", 00:14:11.094 "traddr": "0000:00:10.0", 00:14:11.094 "name": "Nvme0" 00:14:11.094 }, 00:14:11.094 "method": "bdev_nvme_attach_controller" 00:14:11.094 }, 00:14:11.094 { 00:14:11.094 "params": { 00:14:11.094 "trtype": "pcie", 00:14:11.094 "traddr": "0000:00:11.0", 00:14:11.094 "name": "Nvme1" 00:14:11.094 }, 00:14:11.094 "method": "bdev_nvme_attach_controller" 00:14:11.094 }, 00:14:11.094 { 00:14:11.094 "method": "bdev_wait_for_examine" 00:14:11.094 } 00:14:11.094 ] 00:14:11.094 } 00:14:11.094 ] 00:14:11.094 } 00:14:11.353 [2024-04-17 14:33:19.791374] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.353 [2024-04-17 14:33:19.849247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.612  Copying: 5120/5120 [kB] (average 1666 MBps) 00:14:11.612 00:14:11.612 14:33:20 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:14:11.612 14:33:20 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:14:11.612 14:33:20 -- dd/common.sh@11 -- # local nvme_ref= 00:14:11.612 14:33:20 -- dd/common.sh@12 -- # local size=4194330 00:14:11.612 14:33:20 -- dd/common.sh@14 -- # local bs=1048576 00:14:11.612 14:33:20 -- dd/common.sh@15 -- # local count=5 00:14:11.612 14:33:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:14:11.870 14:33:20 -- dd/common.sh@18 -- # gen_conf 00:14:11.870 14:33:20 -- dd/common.sh@31 -- # xtrace_disable 00:14:11.870 14:33:20 -- common/autotest_common.sh@10 -- # set +x 00:14:11.870 [2024-04-17 14:33:20.263221] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:11.870 [2024-04-17 14:33:20.263316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63607 ] 00:14:11.870 { 00:14:11.870 "subsystems": [ 00:14:11.870 { 00:14:11.870 "subsystem": "bdev", 00:14:11.870 "config": [ 00:14:11.870 { 00:14:11.870 "params": { 00:14:11.870 "trtype": "pcie", 00:14:11.870 "traddr": "0000:00:10.0", 00:14:11.870 "name": "Nvme0" 00:14:11.870 }, 00:14:11.870 "method": "bdev_nvme_attach_controller" 00:14:11.870 }, 00:14:11.870 { 00:14:11.870 "params": { 00:14:11.870 "trtype": "pcie", 00:14:11.870 "traddr": "0000:00:11.0", 00:14:11.870 "name": "Nvme1" 00:14:11.870 }, 00:14:11.870 "method": "bdev_nvme_attach_controller" 00:14:11.870 }, 00:14:11.870 { 00:14:11.870 "method": "bdev_wait_for_examine" 00:14:11.870 } 00:14:11.870 ] 00:14:11.870 } 00:14:11.870 ] 00:14:11.870 } 00:14:11.871 [2024-04-17 14:33:20.399189] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.871 [2024-04-17 14:33:20.467574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.388  Copying: 5120/5120 [kB] (average 1000 MBps) 00:14:12.388 00:14:12.388 14:33:20 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:14:12.388 00:14:12.388 real 0m6.755s 00:14:12.388 user 0m5.054s 00:14:12.388 sys 0m2.768s 00:14:12.388 14:33:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:12.388 14:33:20 -- common/autotest_common.sh@10 -- # set +x 00:14:12.388 ************************************ 00:14:12.388 END TEST spdk_dd_bdev_to_bdev 00:14:12.388 ************************************ 00:14:12.388 14:33:20 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:14:12.388 14:33:20 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:14:12.388 14:33:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:12.388 14:33:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.388 14:33:20 -- common/autotest_common.sh@10 -- # set +x 00:14:12.651 ************************************ 00:14:12.651 START TEST spdk_dd_uring 00:14:12.651 ************************************ 00:14:12.651 14:33:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:14:12.651 * Looking for test storage... 
00:14:12.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:12.651 14:33:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.651 14:33:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.651 14:33:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.651 14:33:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.651 14:33:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.651 14:33:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.651 14:33:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.651 14:33:21 -- paths/export.sh@5 -- # export PATH 00:14:12.651 14:33:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.651 14:33:21 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:14:12.652 14:33:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:12.652 14:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.652 14:33:21 -- common/autotest_common.sh@10 -- # set +x 00:14:12.652 ************************************ 00:14:12.652 START TEST dd_uring_copy 00:14:12.652 ************************************ 00:14:12.652 14:33:21 -- common/autotest_common.sh@1111 -- # uring_zram_copy 00:14:12.652 14:33:21 -- dd/uring.sh@15 -- # local zram_dev_id 00:14:12.652 14:33:21 -- dd/uring.sh@16 -- # local magic 00:14:12.652 14:33:21 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:14:12.652 14:33:21 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:14:12.652 14:33:21 -- dd/uring.sh@19 -- # local verify_magic 00:14:12.652 14:33:21 -- dd/uring.sh@21 -- # init_zram 00:14:12.652 14:33:21 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:14:12.652 14:33:21 -- dd/common.sh@164 -- # return 00:14:12.652 14:33:21 -- dd/uring.sh@22 -- # create_zram_dev 00:14:12.652 14:33:21 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:14:12.652 14:33:21 -- dd/uring.sh@22 -- # zram_dev_id=1 00:14:12.652 14:33:21 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:14:12.652 14:33:21 -- dd/common.sh@181 -- # local id=1 00:14:12.652 14:33:21 -- dd/common.sh@182 -- # local size=512M 00:14:12.652 14:33:21 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:14:12.652 14:33:21 -- dd/common.sh@186 -- # echo 512M 00:14:12.652 14:33:21 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:14:12.652 14:33:21 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:14:12.652 14:33:21 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:14:12.652 14:33:21 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:14:12.652 14:33:21 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:14:12.652 14:33:21 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:14:12.652 14:33:21 -- dd/uring.sh@41 -- # gen_bytes 1024 00:14:12.652 14:33:21 -- dd/common.sh@98 -- # xtrace_disable 00:14:12.652 14:33:21 -- common/autotest_common.sh@10 -- # set +x 00:14:12.652 14:33:21 -- dd/uring.sh@41 -- # magic=f4sp4fv3s85qpjb97f7bvu3ln4q2h1upcj54sfxx1cqt64llkaigs5baj5wkoe0rx930as0036zkqut50z7b4a76x20d0sam09sbo5lcg0sftecjke3vne32i3w3wd78j3xqdiq9g35de508mqadb178bi2i4rm1bgz3ipj7oit8xgxc798x0kqy83mw6z3qra7yvi95f7v5dslfuqyreme01hu97ywjilb3ha7q13ob0cvm8zn4xcz71chnp1z9k4vfpew60kiwxqtf23c9cortdta9jnqeb6piwo318obgu6g1gqxv08yxl3q06l3gde8idch5hz9xeeue66k1wnas7xghp32bi5q00yyii9sk4711p49e8jfo0x72v7ceqyn7o4i2i0bvu3zwvp0wpi0wru7xp235qnuagdrv48d9l4opgkq5hgvrilaodb3kg5ky6a2fmefrz3yks2g2hxevrva3eapnericw39ycvtgpfazhs6k27u9vje08pr62fr1kp879jhlc6bdowrfv0a4wsvx2yrb59n0320m8p74p71fzvrtjz5gxmyvrd0hm6sgcmec9q87x536lvch6ov2gqo4exb8jlequsv24fwac31zrdqu0pmsv2gt5utsrm94wvwe8kp42qjdwreyjpngbzild7r0y5mslzxjdzzy30her4d41aujz5r9q4bdx0pp03074od70i14xrvt0b95nlbiz1zfnhkgi4jn4qbcd5n4ia5yitqdsz7zyj00pp23nokz8unztl45yexfs8hinksbqu8eu5zni58qkk6sdjtdcgidgzgk9kqqc6utm9psz0sz66f5hdj4qcdgpgmukj0jahasetjckkdtaegyqssgwomkmvblyeydei0ryj7e5tibc15n9wm21x6d0xmo3rwy1pkaen7d1spq4wpw5pkjkq247kfhh9e0lmh7lmk3fdxvuw17iwmzbsgl6han6cfb71k69tex56tghlo1ewuaud4aeh0e0npp9n83 00:14:12.652 14:33:21 -- dd/uring.sh@42 -- # echo 
f4sp4fv3s85qpjb97f7bvu3ln4q2h1upcj54sfxx1cqt64llkaigs5baj5wkoe0rx930as0036zkqut50z7b4a76x20d0sam09sbo5lcg0sftecjke3vne32i3w3wd78j3xqdiq9g35de508mqadb178bi2i4rm1bgz3ipj7oit8xgxc798x0kqy83mw6z3qra7yvi95f7v5dslfuqyreme01hu97ywjilb3ha7q13ob0cvm8zn4xcz71chnp1z9k4vfpew60kiwxqtf23c9cortdta9jnqeb6piwo318obgu6g1gqxv08yxl3q06l3gde8idch5hz9xeeue66k1wnas7xghp32bi5q00yyii9sk4711p49e8jfo0x72v7ceqyn7o4i2i0bvu3zwvp0wpi0wru7xp235qnuagdrv48d9l4opgkq5hgvrilaodb3kg5ky6a2fmefrz3yks2g2hxevrva3eapnericw39ycvtgpfazhs6k27u9vje08pr62fr1kp879jhlc6bdowrfv0a4wsvx2yrb59n0320m8p74p71fzvrtjz5gxmyvrd0hm6sgcmec9q87x536lvch6ov2gqo4exb8jlequsv24fwac31zrdqu0pmsv2gt5utsrm94wvwe8kp42qjdwreyjpngbzild7r0y5mslzxjdzzy30her4d41aujz5r9q4bdx0pp03074od70i14xrvt0b95nlbiz1zfnhkgi4jn4qbcd5n4ia5yitqdsz7zyj00pp23nokz8unztl45yexfs8hinksbqu8eu5zni58qkk6sdjtdcgidgzgk9kqqc6utm9psz0sz66f5hdj4qcdgpgmukj0jahasetjckkdtaegyqssgwomkmvblyeydei0ryj7e5tibc15n9wm21x6d0xmo3rwy1pkaen7d1spq4wpw5pkjkq247kfhh9e0lmh7lmk3fdxvuw17iwmzbsgl6han6cfb71k69tex56tghlo1ewuaud4aeh0e0npp9n83 00:14:12.652 14:33:21 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:14:12.652 [2024-04-17 14:33:21.211292] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:12.652 [2024-04-17 14:33:21.211372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63683 ] 00:14:12.911 [2024-04-17 14:33:21.344577] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.911 [2024-04-17 14:33:21.403847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.738  Copying: 511/511 [MB] (average 1336 MBps) 00:14:13.738 00:14:13.738 14:33:22 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:14:13.738 14:33:22 -- dd/uring.sh@54 -- # gen_conf 00:14:13.738 14:33:22 -- dd/common.sh@31 -- # xtrace_disable 00:14:13.738 14:33:22 -- common/autotest_common.sh@10 -- # set +x 00:14:13.738 [2024-04-17 14:33:22.307417] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:13.738 [2024-04-17 14:33:22.307518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63699 ] 00:14:13.738 { 00:14:13.738 "subsystems": [ 00:14:13.738 { 00:14:13.738 "subsystem": "bdev", 00:14:13.738 "config": [ 00:14:13.738 { 00:14:13.738 "params": { 00:14:13.738 "block_size": 512, 00:14:13.738 "num_blocks": 1048576, 00:14:13.738 "name": "malloc0" 00:14:13.738 }, 00:14:13.738 "method": "bdev_malloc_create" 00:14:13.738 }, 00:14:13.738 { 00:14:13.738 "params": { 00:14:13.738 "filename": "/dev/zram1", 00:14:13.738 "name": "uring0" 00:14:13.739 }, 00:14:13.739 "method": "bdev_uring_create" 00:14:13.739 }, 00:14:13.739 { 00:14:13.739 "method": "bdev_wait_for_examine" 00:14:13.739 } 00:14:13.739 ] 00:14:13.739 } 00:14:13.739 ] 00:14:13.739 } 00:14:14.004 [2024-04-17 14:33:22.446210] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.004 [2024-04-17 14:33:22.520715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.540  Copying: 160/512 [MB] (160 MBps) Copying: 345/512 [MB] (185 MBps) Copying: 511/512 [MB] (166 MBps) Copying: 512/512 [MB] (average 170 MBps) 00:14:17.540 00:14:17.540 14:33:25 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:14:17.540 14:33:25 -- dd/uring.sh@60 -- # gen_conf 00:14:17.540 14:33:25 -- dd/common.sh@31 -- # xtrace_disable 00:14:17.540 14:33:25 -- common/autotest_common.sh@10 -- # set +x 00:14:17.540 [2024-04-17 14:33:26.039140] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:17.540 [2024-04-17 14:33:26.039260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63754 ] 00:14:17.540 { 00:14:17.540 "subsystems": [ 00:14:17.540 { 00:14:17.540 "subsystem": "bdev", 00:14:17.540 "config": [ 00:14:17.540 { 00:14:17.540 "params": { 00:14:17.540 "block_size": 512, 00:14:17.540 "num_blocks": 1048576, 00:14:17.540 "name": "malloc0" 00:14:17.540 }, 00:14:17.540 "method": "bdev_malloc_create" 00:14:17.540 }, 00:14:17.540 { 00:14:17.540 "params": { 00:14:17.540 "filename": "/dev/zram1", 00:14:17.540 "name": "uring0" 00:14:17.540 }, 00:14:17.540 "method": "bdev_uring_create" 00:14:17.540 }, 00:14:17.540 { 00:14:17.540 "method": "bdev_wait_for_examine" 00:14:17.540 } 00:14:17.540 ] 00:14:17.540 } 00:14:17.540 ] 00:14:17.540 } 00:14:17.798 [2024-04-17 14:33:26.172466] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.798 [2024-04-17 14:33:26.256150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.959  Copying: 152/512 [MB] (152 MBps) Copying: 278/512 [MB] (126 MBps) Copying: 429/512 [MB] (150 MBps) Copying: 512/512 [MB] (average 138 MBps) 00:14:21.959 00:14:21.959 14:33:30 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:14:21.959 14:33:30 -- dd/uring.sh@66 -- # [[ f4sp4fv3s85qpjb97f7bvu3ln4q2h1upcj54sfxx1cqt64llkaigs5baj5wkoe0rx930as0036zkqut50z7b4a76x20d0sam09sbo5lcg0sftecjke3vne32i3w3wd78j3xqdiq9g35de508mqadb178bi2i4rm1bgz3ipj7oit8xgxc798x0kqy83mw6z3qra7yvi95f7v5dslfuqyreme01hu97ywjilb3ha7q13ob0cvm8zn4xcz71chnp1z9k4vfpew60kiwxqtf23c9cortdta9jnqeb6piwo318obgu6g1gqxv08yxl3q06l3gde8idch5hz9xeeue66k1wnas7xghp32bi5q00yyii9sk4711p49e8jfo0x72v7ceqyn7o4i2i0bvu3zwvp0wpi0wru7xp235qnuagdrv48d9l4opgkq5hgvrilaodb3kg5ky6a2fmefrz3yks2g2hxevrva3eapnericw39ycvtgpfazhs6k27u9vje08pr62fr1kp879jhlc6bdowrfv0a4wsvx2yrb59n0320m8p74p71fzvrtjz5gxmyvrd0hm6sgcmec9q87x536lvch6ov2gqo4exb8jlequsv24fwac31zrdqu0pmsv2gt5utsrm94wvwe8kp42qjdwreyjpngbzild7r0y5mslzxjdzzy30her4d41aujz5r9q4bdx0pp03074od70i14xrvt0b95nlbiz1zfnhkgi4jn4qbcd5n4ia5yitqdsz7zyj00pp23nokz8unztl45yexfs8hinksbqu8eu5zni58qkk6sdjtdcgidgzgk9kqqc6utm9psz0sz66f5hdj4qcdgpgmukj0jahasetjckkdtaegyqssgwomkmvblyeydei0ryj7e5tibc15n9wm21x6d0xmo3rwy1pkaen7d1spq4wpw5pkjkq247kfhh9e0lmh7lmk3fdxvuw17iwmzbsgl6han6cfb71k69tex56tghlo1ewuaud4aeh0e0npp9n83 == 
\f\4\s\p\4\f\v\3\s\8\5\q\p\j\b\9\7\f\7\b\v\u\3\l\n\4\q\2\h\1\u\p\c\j\5\4\s\f\x\x\1\c\q\t\6\4\l\l\k\a\i\g\s\5\b\a\j\5\w\k\o\e\0\r\x\9\3\0\a\s\0\0\3\6\z\k\q\u\t\5\0\z\7\b\4\a\7\6\x\2\0\d\0\s\a\m\0\9\s\b\o\5\l\c\g\0\s\f\t\e\c\j\k\e\3\v\n\e\3\2\i\3\w\3\w\d\7\8\j\3\x\q\d\i\q\9\g\3\5\d\e\5\0\8\m\q\a\d\b\1\7\8\b\i\2\i\4\r\m\1\b\g\z\3\i\p\j\7\o\i\t\8\x\g\x\c\7\9\8\x\0\k\q\y\8\3\m\w\6\z\3\q\r\a\7\y\v\i\9\5\f\7\v\5\d\s\l\f\u\q\y\r\e\m\e\0\1\h\u\9\7\y\w\j\i\l\b\3\h\a\7\q\1\3\o\b\0\c\v\m\8\z\n\4\x\c\z\7\1\c\h\n\p\1\z\9\k\4\v\f\p\e\w\6\0\k\i\w\x\q\t\f\2\3\c\9\c\o\r\t\d\t\a\9\j\n\q\e\b\6\p\i\w\o\3\1\8\o\b\g\u\6\g\1\g\q\x\v\0\8\y\x\l\3\q\0\6\l\3\g\d\e\8\i\d\c\h\5\h\z\9\x\e\e\u\e\6\6\k\1\w\n\a\s\7\x\g\h\p\3\2\b\i\5\q\0\0\y\y\i\i\9\s\k\4\7\1\1\p\4\9\e\8\j\f\o\0\x\7\2\v\7\c\e\q\y\n\7\o\4\i\2\i\0\b\v\u\3\z\w\v\p\0\w\p\i\0\w\r\u\7\x\p\2\3\5\q\n\u\a\g\d\r\v\4\8\d\9\l\4\o\p\g\k\q\5\h\g\v\r\i\l\a\o\d\b\3\k\g\5\k\y\6\a\2\f\m\e\f\r\z\3\y\k\s\2\g\2\h\x\e\v\r\v\a\3\e\a\p\n\e\r\i\c\w\3\9\y\c\v\t\g\p\f\a\z\h\s\6\k\2\7\u\9\v\j\e\0\8\p\r\6\2\f\r\1\k\p\8\7\9\j\h\l\c\6\b\d\o\w\r\f\v\0\a\4\w\s\v\x\2\y\r\b\5\9\n\0\3\2\0\m\8\p\7\4\p\7\1\f\z\v\r\t\j\z\5\g\x\m\y\v\r\d\0\h\m\6\s\g\c\m\e\c\9\q\8\7\x\5\3\6\l\v\c\h\6\o\v\2\g\q\o\4\e\x\b\8\j\l\e\q\u\s\v\2\4\f\w\a\c\3\1\z\r\d\q\u\0\p\m\s\v\2\g\t\5\u\t\s\r\m\9\4\w\v\w\e\8\k\p\4\2\q\j\d\w\r\e\y\j\p\n\g\b\z\i\l\d\7\r\0\y\5\m\s\l\z\x\j\d\z\z\y\3\0\h\e\r\4\d\4\1\a\u\j\z\5\r\9\q\4\b\d\x\0\p\p\0\3\0\7\4\o\d\7\0\i\1\4\x\r\v\t\0\b\9\5\n\l\b\i\z\1\z\f\n\h\k\g\i\4\j\n\4\q\b\c\d\5\n\4\i\a\5\y\i\t\q\d\s\z\7\z\y\j\0\0\p\p\2\3\n\o\k\z\8\u\n\z\t\l\4\5\y\e\x\f\s\8\h\i\n\k\s\b\q\u\8\e\u\5\z\n\i\5\8\q\k\k\6\s\d\j\t\d\c\g\i\d\g\z\g\k\9\k\q\q\c\6\u\t\m\9\p\s\z\0\s\z\6\6\f\5\h\d\j\4\q\c\d\g\p\g\m\u\k\j\0\j\a\h\a\s\e\t\j\c\k\k\d\t\a\e\g\y\q\s\s\g\w\o\m\k\m\v\b\l\y\e\y\d\e\i\0\r\y\j\7\e\5\t\i\b\c\1\5\n\9\w\m\2\1\x\6\d\0\x\m\o\3\r\w\y\1\p\k\a\e\n\7\d\1\s\p\q\4\w\p\w\5\p\k\j\k\q\2\4\7\k\f\h\h\9\e\0\l\m\h\7\l\m\k\3\f\d\x\v\u\w\1\7\i\w\m\z\b\s\g\l\6\h\a\n\6\c\f\b\7\1\k\6\9\t\e\x\5\6\t\g\h\l\o\1\e\w\u\a\u\d\4\a\e\h\0\e\0\n\p\p\9\n\8\3 ]] 00:14:21.959 14:33:30 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:14:21.959 14:33:30 -- dd/uring.sh@69 -- # [[ f4sp4fv3s85qpjb97f7bvu3ln4q2h1upcj54sfxx1cqt64llkaigs5baj5wkoe0rx930as0036zkqut50z7b4a76x20d0sam09sbo5lcg0sftecjke3vne32i3w3wd78j3xqdiq9g35de508mqadb178bi2i4rm1bgz3ipj7oit8xgxc798x0kqy83mw6z3qra7yvi95f7v5dslfuqyreme01hu97ywjilb3ha7q13ob0cvm8zn4xcz71chnp1z9k4vfpew60kiwxqtf23c9cortdta9jnqeb6piwo318obgu6g1gqxv08yxl3q06l3gde8idch5hz9xeeue66k1wnas7xghp32bi5q00yyii9sk4711p49e8jfo0x72v7ceqyn7o4i2i0bvu3zwvp0wpi0wru7xp235qnuagdrv48d9l4opgkq5hgvrilaodb3kg5ky6a2fmefrz3yks2g2hxevrva3eapnericw39ycvtgpfazhs6k27u9vje08pr62fr1kp879jhlc6bdowrfv0a4wsvx2yrb59n0320m8p74p71fzvrtjz5gxmyvrd0hm6sgcmec9q87x536lvch6ov2gqo4exb8jlequsv24fwac31zrdqu0pmsv2gt5utsrm94wvwe8kp42qjdwreyjpngbzild7r0y5mslzxjdzzy30her4d41aujz5r9q4bdx0pp03074od70i14xrvt0b95nlbiz1zfnhkgi4jn4qbcd5n4ia5yitqdsz7zyj00pp23nokz8unztl45yexfs8hinksbqu8eu5zni58qkk6sdjtdcgidgzgk9kqqc6utm9psz0sz66f5hdj4qcdgpgmukj0jahasetjckkdtaegyqssgwomkmvblyeydei0ryj7e5tibc15n9wm21x6d0xmo3rwy1pkaen7d1spq4wpw5pkjkq247kfhh9e0lmh7lmk3fdxvuw17iwmzbsgl6han6cfb71k69tex56tghlo1ewuaud4aeh0e0npp9n83 == 
\f\4\s\p\4\f\v\3\s\8\5\q\p\j\b\9\7\f\7\b\v\u\3\l\n\4\q\2\h\1\u\p\c\j\5\4\s\f\x\x\1\c\q\t\6\4\l\l\k\a\i\g\s\5\b\a\j\5\w\k\o\e\0\r\x\9\3\0\a\s\0\0\3\6\z\k\q\u\t\5\0\z\7\b\4\a\7\6\x\2\0\d\0\s\a\m\0\9\s\b\o\5\l\c\g\0\s\f\t\e\c\j\k\e\3\v\n\e\3\2\i\3\w\3\w\d\7\8\j\3\x\q\d\i\q\9\g\3\5\d\e\5\0\8\m\q\a\d\b\1\7\8\b\i\2\i\4\r\m\1\b\g\z\3\i\p\j\7\o\i\t\8\x\g\x\c\7\9\8\x\0\k\q\y\8\3\m\w\6\z\3\q\r\a\7\y\v\i\9\5\f\7\v\5\d\s\l\f\u\q\y\r\e\m\e\0\1\h\u\9\7\y\w\j\i\l\b\3\h\a\7\q\1\3\o\b\0\c\v\m\8\z\n\4\x\c\z\7\1\c\h\n\p\1\z\9\k\4\v\f\p\e\w\6\0\k\i\w\x\q\t\f\2\3\c\9\c\o\r\t\d\t\a\9\j\n\q\e\b\6\p\i\w\o\3\1\8\o\b\g\u\6\g\1\g\q\x\v\0\8\y\x\l\3\q\0\6\l\3\g\d\e\8\i\d\c\h\5\h\z\9\x\e\e\u\e\6\6\k\1\w\n\a\s\7\x\g\h\p\3\2\b\i\5\q\0\0\y\y\i\i\9\s\k\4\7\1\1\p\4\9\e\8\j\f\o\0\x\7\2\v\7\c\e\q\y\n\7\o\4\i\2\i\0\b\v\u\3\z\w\v\p\0\w\p\i\0\w\r\u\7\x\p\2\3\5\q\n\u\a\g\d\r\v\4\8\d\9\l\4\o\p\g\k\q\5\h\g\v\r\i\l\a\o\d\b\3\k\g\5\k\y\6\a\2\f\m\e\f\r\z\3\y\k\s\2\g\2\h\x\e\v\r\v\a\3\e\a\p\n\e\r\i\c\w\3\9\y\c\v\t\g\p\f\a\z\h\s\6\k\2\7\u\9\v\j\e\0\8\p\r\6\2\f\r\1\k\p\8\7\9\j\h\l\c\6\b\d\o\w\r\f\v\0\a\4\w\s\v\x\2\y\r\b\5\9\n\0\3\2\0\m\8\p\7\4\p\7\1\f\z\v\r\t\j\z\5\g\x\m\y\v\r\d\0\h\m\6\s\g\c\m\e\c\9\q\8\7\x\5\3\6\l\v\c\h\6\o\v\2\g\q\o\4\e\x\b\8\j\l\e\q\u\s\v\2\4\f\w\a\c\3\1\z\r\d\q\u\0\p\m\s\v\2\g\t\5\u\t\s\r\m\9\4\w\v\w\e\8\k\p\4\2\q\j\d\w\r\e\y\j\p\n\g\b\z\i\l\d\7\r\0\y\5\m\s\l\z\x\j\d\z\z\y\3\0\h\e\r\4\d\4\1\a\u\j\z\5\r\9\q\4\b\d\x\0\p\p\0\3\0\7\4\o\d\7\0\i\1\4\x\r\v\t\0\b\9\5\n\l\b\i\z\1\z\f\n\h\k\g\i\4\j\n\4\q\b\c\d\5\n\4\i\a\5\y\i\t\q\d\s\z\7\z\y\j\0\0\p\p\2\3\n\o\k\z\8\u\n\z\t\l\4\5\y\e\x\f\s\8\h\i\n\k\s\b\q\u\8\e\u\5\z\n\i\5\8\q\k\k\6\s\d\j\t\d\c\g\i\d\g\z\g\k\9\k\q\q\c\6\u\t\m\9\p\s\z\0\s\z\6\6\f\5\h\d\j\4\q\c\d\g\p\g\m\u\k\j\0\j\a\h\a\s\e\t\j\c\k\k\d\t\a\e\g\y\q\s\s\g\w\o\m\k\m\v\b\l\y\e\y\d\e\i\0\r\y\j\7\e\5\t\i\b\c\1\5\n\9\w\m\2\1\x\6\d\0\x\m\o\3\r\w\y\1\p\k\a\e\n\7\d\1\s\p\q\4\w\p\w\5\p\k\j\k\q\2\4\7\k\f\h\h\9\e\0\l\m\h\7\l\m\k\3\f\d\x\v\u\w\1\7\i\w\m\z\b\s\g\l\6\h\a\n\6\c\f\b\7\1\k\6\9\t\e\x\5\6\t\g\h\l\o\1\e\w\u\a\u\d\4\a\e\h\0\e\0\n\p\p\9\n\8\3 ]] 00:14:21.959 14:33:30 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:14:22.525 14:33:30 -- dd/uring.sh@75 -- # gen_conf 00:14:22.525 14:33:30 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:14:22.525 14:33:30 -- dd/common.sh@31 -- # xtrace_disable 00:14:22.525 14:33:30 -- common/autotest_common.sh@10 -- # set +x 00:14:22.525 { 00:14:22.525 "subsystems": [ 00:14:22.525 { 00:14:22.525 "subsystem": "bdev", 00:14:22.525 "config": [ 00:14:22.525 { 00:14:22.525 "params": { 00:14:22.525 "block_size": 512, 00:14:22.525 "num_blocks": 1048576, 00:14:22.525 "name": "malloc0" 00:14:22.525 }, 00:14:22.525 "method": "bdev_malloc_create" 00:14:22.525 }, 00:14:22.525 { 00:14:22.525 "params": { 00:14:22.525 "filename": "/dev/zram1", 00:14:22.525 "name": "uring0" 00:14:22.525 }, 00:14:22.525 "method": "bdev_uring_create" 00:14:22.525 }, 00:14:22.525 { 00:14:22.525 "method": "bdev_wait_for_examine" 00:14:22.525 } 00:14:22.525 ] 00:14:22.525 } 00:14:22.525 ] 00:14:22.525 } 00:14:22.525 [2024-04-17 14:33:30.887128] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:22.525 [2024-04-17 14:33:30.887212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63842 ] 00:14:22.525 [2024-04-17 14:33:31.036549] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.525 [2024-04-17 14:33:31.098046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.649  Copying: 150/512 [MB] (150 MBps) Copying: 297/512 [MB] (146 MBps) Copying: 435/512 [MB] (138 MBps) Copying: 512/512 [MB] (average 142 MBps) 00:14:26.649 00:14:26.649 14:33:35 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:14:26.649 14:33:35 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:14:26.649 14:33:35 -- dd/uring.sh@87 -- # : 00:14:26.649 14:33:35 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:14:26.649 14:33:35 -- dd/uring.sh@87 -- # : 00:14:26.649 14:33:35 -- dd/uring.sh@87 -- # gen_conf 00:14:26.649 14:33:35 -- dd/common.sh@31 -- # xtrace_disable 00:14:26.649 14:33:35 -- common/autotest_common.sh@10 -- # set +x 00:14:26.649 { 00:14:26.649 "subsystems": [ 00:14:26.649 { 00:14:26.649 "subsystem": "bdev", 00:14:26.649 "config": [ 00:14:26.649 { 00:14:26.649 "params": { 00:14:26.649 "block_size": 512, 00:14:26.649 "num_blocks": 1048576, 00:14:26.649 "name": "malloc0" 00:14:26.649 }, 00:14:26.649 "method": "bdev_malloc_create" 00:14:26.649 }, 00:14:26.649 { 00:14:26.649 "params": { 00:14:26.649 "filename": "/dev/zram1", 00:14:26.649 "name": "uring0" 00:14:26.649 }, 00:14:26.649 "method": "bdev_uring_create" 00:14:26.649 }, 00:14:26.649 { 00:14:26.649 "params": { 00:14:26.649 "name": "uring0" 00:14:26.649 }, 00:14:26.649 "method": "bdev_uring_delete" 00:14:26.649 }, 00:14:26.649 { 00:14:26.649 "method": "bdev_wait_for_examine" 00:14:26.649 } 00:14:26.649 ] 00:14:26.649 } 00:14:26.649 ] 00:14:26.649 } 00:14:26.649 [2024-04-17 14:33:35.171175] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:26.649 [2024-04-17 14:33:35.171285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63903 ] 00:14:26.907 [2024-04-17 14:33:35.309946] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.907 [2024-04-17 14:33:35.370100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.423  Copying: 0/0 [B] (average 0 Bps) 00:14:27.423 00:14:27.423 14:33:35 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:14:27.423 14:33:35 -- dd/uring.sh@94 -- # gen_conf 00:14:27.423 14:33:35 -- dd/uring.sh@94 -- # : 00:14:27.423 14:33:35 -- dd/common.sh@31 -- # xtrace_disable 00:14:27.423 14:33:35 -- common/autotest_common.sh@638 -- # local es=0 00:14:27.423 14:33:35 -- common/autotest_common.sh@10 -- # set +x 00:14:27.423 14:33:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:14:27.423 14:33:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:27.423 14:33:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:27.423 14:33:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:27.423 14:33:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:27.423 14:33:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:27.423 14:33:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:27.423 14:33:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:27.423 14:33:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:27.423 14:33:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:14:27.423 [2024-04-17 14:33:35.871914] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:27.423 [2024-04-17 14:33:35.872022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63933 ] 00:14:27.423 { 00:14:27.423 "subsystems": [ 00:14:27.423 { 00:14:27.423 "subsystem": "bdev", 00:14:27.423 "config": [ 00:14:27.423 { 00:14:27.423 "params": { 00:14:27.423 "block_size": 512, 00:14:27.423 "num_blocks": 1048576, 00:14:27.423 "name": "malloc0" 00:14:27.423 }, 00:14:27.423 "method": "bdev_malloc_create" 00:14:27.423 }, 00:14:27.423 { 00:14:27.423 "params": { 00:14:27.423 "filename": "/dev/zram1", 00:14:27.423 "name": "uring0" 00:14:27.423 }, 00:14:27.423 "method": "bdev_uring_create" 00:14:27.423 }, 00:14:27.423 { 00:14:27.423 "params": { 00:14:27.423 "name": "uring0" 00:14:27.423 }, 00:14:27.423 "method": "bdev_uring_delete" 00:14:27.423 }, 00:14:27.423 { 00:14:27.423 "method": "bdev_wait_for_examine" 00:14:27.423 } 00:14:27.423 ] 00:14:27.423 } 00:14:27.423 ] 00:14:27.423 } 00:14:27.682 [2024-04-17 14:33:36.027810] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.682 [2024-04-17 14:33:36.109932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.941 [2024-04-17 14:33:36.294148] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:14:27.941 [2024-04-17 14:33:36.294215] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:14:27.941 [2024-04-17 14:33:36.294227] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:14:27.941 [2024-04-17 14:33:36.294239] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:27.941 [2024-04-17 14:33:36.474575] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:14:28.199 14:33:36 -- common/autotest_common.sh@641 -- # es=237 00:14:28.199 14:33:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:28.199 14:33:36 -- common/autotest_common.sh@650 -- # es=109 00:14:28.199 14:33:36 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:28.199 14:33:36 -- common/autotest_common.sh@658 -- # es=1 00:14:28.199 14:33:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:28.199 14:33:36 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:14:28.199 14:33:36 -- dd/common.sh@172 -- # local id=1 00:14:28.199 14:33:36 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:14:28.199 14:33:36 -- dd/common.sh@176 -- # echo 1 00:14:28.199 14:33:36 -- dd/common.sh@177 -- # echo 1 00:14:28.199 14:33:36 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:14:28.458 00:14:28.458 real 0m15.668s 00:14:28.458 user 0m10.725s 00:14:28.458 sys 0m13.877s 00:14:28.458 14:33:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:28.458 ************************************ 00:14:28.458 14:33:36 -- common/autotest_common.sh@10 -- # set +x 00:14:28.458 END TEST dd_uring_copy 00:14:28.458 ************************************ 00:14:28.458 ************************************ 00:14:28.458 END TEST spdk_dd_uring 00:14:28.458 ************************************ 00:14:28.458 00:14:28.458 real 0m15.853s 00:14:28.458 user 0m10.794s 00:14:28.458 sys 0m13.984s 00:14:28.458 14:33:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:28.458 14:33:36 -- common/autotest_common.sh@10 -- # set +x 00:14:28.458 14:33:36 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
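A rough reconstruction of the copy/verify loop that dd/uring.sh has been driving in the output above may help when skimming the xtrace. The binary path, bdev names, JSON config, the 1 KiB read/compare and the diff step are taken from the log; the conf variable, the process substitution and the magic variable are assumptions about script plumbing that is not visible in this excerpt.

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
  # JSON copied from what the tests print before each spdk_dd run: a 512B x 1048576-block
  # malloc bdev plus a uring bdev backed by /dev/zram1.
  conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" }, "method": "bdev_malloc_create" },
      { "params": { "filename": "/dev/zram1", "name": "uring0" }, "method": "bdev_uring_create" },
      { "method": "bdev_wait_for_examine" } ] } ] }'
  "$SPDK_DD" --ib=uring0 --of="$DD_DIR/magic.dump1" --json <(printf '%s' "$conf")  # read data back out of the uring bdev
  read -rn1024 verify_magic < "$DD_DIR/magic.dump1"   # redirection assumed; the 1 KiB read plus pattern match is in the log
  [[ $verify_magic == "$magic" ]]                      # magic pattern written through uring0 earlier in the run
  diff -q "$DD_DIR/magic.dump0" "$DD_DIR/magic.dump1"  # whole-file comparison
  "$SPDK_DD" --ib=uring0 --ob=malloc0 --json <(printf '%s' "$conf")                # bdev-to-bdev leg before uring0 is deleted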
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:14:28.458 14:33:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:28.458 14:33:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.458 14:33:36 -- common/autotest_common.sh@10 -- # set +x 00:14:28.458 ************************************ 00:14:28.458 START TEST spdk_dd_sparse 00:14:28.458 ************************************ 00:14:28.458 14:33:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:14:28.458 * Looking for test storage... 00:14:28.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:28.458 14:33:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.458 14:33:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.458 14:33:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.458 14:33:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.458 14:33:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.458 14:33:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.458 14:33:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.458 14:33:37 -- paths/export.sh@5 -- # export PATH 00:14:28.458 14:33:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.458 14:33:37 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:14:28.458 14:33:37 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:14:28.458 14:33:37 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:14:28.458 14:33:37 -- dd/sparse.sh@111 -- # file2=file_zero2 00:14:28.458 14:33:37 -- dd/sparse.sh@112 -- # file3=file_zero3 00:14:28.458 14:33:37 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:14:28.458 14:33:37 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:14:28.458 14:33:37 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:14:28.458 14:33:37 -- dd/sparse.sh@118 -- # prepare 00:14:28.458 14:33:37 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:14:28.458 14:33:37 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:14:28.458 1+0 records in 00:14:28.458 1+0 records out 00:14:28.458 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00573402 s, 731 MB/s 00:14:28.458 14:33:37 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:14:28.458 1+0 records in 00:14:28.458 1+0 records out 00:14:28.458 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00499744 s, 839 MB/s 00:14:28.458 14:33:37 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:14:28.458 1+0 records in 00:14:28.458 1+0 records out 00:14:28.458 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00374743 s, 1.1 GB/s 00:14:28.458 14:33:37 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:14:28.458 14:33:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:28.458 14:33:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.458 14:33:37 -- common/autotest_common.sh@10 -- # set +x 00:14:28.716 ************************************ 00:14:28.716 START TEST dd_sparse_file_to_file 00:14:28.716 ************************************ 00:14:28.716 14:33:37 -- common/autotest_common.sh@1111 -- # file_to_file 00:14:28.716 14:33:37 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:14:28.716 14:33:37 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:14:28.716 14:33:37 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:14:28.716 14:33:37 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:14:28.716 14:33:37 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:14:28.716 14:33:37 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:14:28.716 14:33:37 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:14:28.716 14:33:37 -- dd/sparse.sh@41 -- # gen_conf 00:14:28.716 14:33:37 -- dd/common.sh@31 -- # xtrace_disable 00:14:28.716 14:33:37 -- common/autotest_common.sh@10 -- # set +x 00:14:28.716 [2024-04-17 14:33:37.142994] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:28.716 [2024-04-17 14:33:37.143080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64027 ] 00:14:28.716 { 00:14:28.716 "subsystems": [ 00:14:28.716 { 00:14:28.716 "subsystem": "bdev", 00:14:28.716 "config": [ 00:14:28.716 { 00:14:28.716 "params": { 00:14:28.716 "block_size": 4096, 00:14:28.716 "filename": "dd_sparse_aio_disk", 00:14:28.716 "name": "dd_aio" 00:14:28.716 }, 00:14:28.716 "method": "bdev_aio_create" 00:14:28.716 }, 00:14:28.716 { 00:14:28.716 "params": { 00:14:28.716 "lvs_name": "dd_lvstore", 00:14:28.716 "bdev_name": "dd_aio" 00:14:28.716 }, 00:14:28.716 "method": "bdev_lvol_create_lvstore" 00:14:28.716 }, 00:14:28.716 { 00:14:28.716 "method": "bdev_wait_for_examine" 00:14:28.716 } 00:14:28.716 ] 00:14:28.716 } 00:14:28.716 ] 00:14:28.716 } 00:14:28.716 [2024-04-17 14:33:37.277592] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.974 [2024-04-17 14:33:37.360533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.233  Copying: 12/36 [MB] (average 1200 MBps) 00:14:29.233 00:14:29.233 14:33:37 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:14:29.233 14:33:37 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:14:29.233 14:33:37 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:14:29.233 14:33:37 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:14:29.233 14:33:37 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:14:29.233 14:33:37 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:14:29.233 14:33:37 -- dd/sparse.sh@52 -- # stat1_b=24576 00:14:29.233 14:33:37 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:14:29.234 14:33:37 -- dd/sparse.sh@53 -- # stat2_b=24576 00:14:29.234 14:33:37 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:14:29.234 00:14:29.234 real 0m0.619s 00:14:29.234 user 0m0.404s 00:14:29.234 sys 0m0.257s 00:14:29.234 14:33:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:29.234 14:33:37 -- common/autotest_common.sh@10 -- # set +x 00:14:29.234 ************************************ 00:14:29.234 END TEST dd_sparse_file_to_file 00:14:29.234 ************************************ 00:14:29.234 14:33:37 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:14:29.234 14:33:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:29.234 14:33:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.234 14:33:37 -- common/autotest_common.sh@10 -- # set +x 00:14:29.234 ************************************ 00:14:29.234 START TEST dd_sparse_file_to_bdev 00:14:29.234 ************************************ 00:14:29.234 14:33:37 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:14:29.234 14:33:37 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:14:29.234 14:33:37 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:14:29.234 14:33:37 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:14:29.234 14:33:37 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:14:29.234 14:33:37 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:14:29.234 14:33:37 -- dd/sparse.sh@73 -- # gen_conf 
00:14:29.234 14:33:37 -- dd/common.sh@31 -- # xtrace_disable 00:14:29.234 14:33:37 -- common/autotest_common.sh@10 -- # set +x 00:14:29.492 [2024-04-17 14:33:37.857811] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:29.492 [2024-04-17 14:33:37.857918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64080 ] 00:14:29.492 { 00:14:29.492 "subsystems": [ 00:14:29.492 { 00:14:29.492 "subsystem": "bdev", 00:14:29.492 "config": [ 00:14:29.492 { 00:14:29.492 "params": { 00:14:29.492 "block_size": 4096, 00:14:29.492 "filename": "dd_sparse_aio_disk", 00:14:29.492 "name": "dd_aio" 00:14:29.492 }, 00:14:29.492 "method": "bdev_aio_create" 00:14:29.492 }, 00:14:29.492 { 00:14:29.492 "params": { 00:14:29.492 "lvs_name": "dd_lvstore", 00:14:29.492 "lvol_name": "dd_lvol", 00:14:29.492 "size": 37748736, 00:14:29.492 "thin_provision": true 00:14:29.492 }, 00:14:29.492 "method": "bdev_lvol_create" 00:14:29.492 }, 00:14:29.492 { 00:14:29.492 "method": "bdev_wait_for_examine" 00:14:29.492 } 00:14:29.492 ] 00:14:29.492 } 00:14:29.492 ] 00:14:29.492 } 00:14:29.492 [2024-04-17 14:33:37.991191] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.492 [2024-04-17 14:33:38.050978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.750 [2024-04-17 14:33:38.114824] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:14:29.750  Copying: 12/36 [MB] (average 800 MBps)[2024-04-17 14:33:38.146495] app.c: 930:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:14:30.007 00:14:30.007 00:14:30.007 00:14:30.007 real 0m0.546s 00:14:30.007 user 0m0.379s 00:14:30.007 sys 0m0.228s 00:14:30.007 14:33:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:30.007 14:33:38 -- common/autotest_common.sh@10 -- # set +x 00:14:30.007 ************************************ 00:14:30.007 END TEST dd_sparse_file_to_bdev 00:14:30.007 ************************************ 00:14:30.007 14:33:38 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:14:30.007 14:33:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:30.007 14:33:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.007 14:33:38 -- common/autotest_common.sh@10 -- # set +x 00:14:30.007 ************************************ 00:14:30.007 START TEST dd_sparse_bdev_to_file 00:14:30.007 ************************************ 00:14:30.007 14:33:38 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:14:30.007 14:33:38 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:14:30.007 14:33:38 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:14:30.007 14:33:38 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:14:30.007 14:33:38 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:14:30.007 14:33:38 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:14:30.008 14:33:38 -- dd/sparse.sh@91 -- # gen_conf 00:14:30.008 14:33:38 -- dd/common.sh@31 -- # xtrace_disable 00:14:30.008 14:33:38 -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.008 { 00:14:30.008 "subsystems": [ 00:14:30.008 { 00:14:30.008 "subsystem": "bdev", 00:14:30.008 "config": [ 00:14:30.008 { 00:14:30.008 "params": { 00:14:30.008 "block_size": 4096, 00:14:30.008 "filename": "dd_sparse_aio_disk", 00:14:30.008 "name": "dd_aio" 00:14:30.008 }, 00:14:30.008 "method": "bdev_aio_create" 00:14:30.008 }, 00:14:30.008 { 00:14:30.008 "method": "bdev_wait_for_examine" 00:14:30.008 } 00:14:30.008 ] 00:14:30.008 } 00:14:30.008 ] 00:14:30.008 } 00:14:30.008 [2024-04-17 14:33:38.505482] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:30.008 [2024-04-17 14:33:38.505584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64116 ] 00:14:30.266 [2024-04-17 14:33:38.643233] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.266 [2024-04-17 14:33:38.701613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.527  Copying: 12/36 [MB] (average 1200 MBps) 00:14:30.527 00:14:30.527 14:33:39 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:14:30.527 14:33:39 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:14:30.527 14:33:39 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:14:30.527 14:33:39 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:14:30.527 14:33:39 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:14:30.527 14:33:39 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:14:30.527 14:33:39 -- dd/sparse.sh@102 -- # stat2_b=24576 00:14:30.527 14:33:39 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:14:30.527 14:33:39 -- dd/sparse.sh@103 -- # stat3_b=24576 00:14:30.527 14:33:39 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:14:30.527 00:14:30.527 real 0m0.620s 00:14:30.527 user 0m0.431s 00:14:30.527 sys 0m0.251s 00:14:30.527 14:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:30.527 ************************************ 00:14:30.527 END TEST dd_sparse_bdev_to_file 00:14:30.527 ************************************ 00:14:30.527 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:30.527 14:33:39 -- dd/sparse.sh@1 -- # cleanup 00:14:30.527 14:33:39 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:14:30.527 14:33:39 -- dd/sparse.sh@12 -- # rm file_zero1 00:14:30.527 14:33:39 -- dd/sparse.sh@13 -- # rm file_zero2 00:14:30.527 14:33:39 -- dd/sparse.sh@14 -- # rm file_zero3 00:14:30.527 00:14:30.527 real 0m2.180s 00:14:30.527 user 0m1.369s 00:14:30.527 sys 0m0.944s 00:14:30.527 14:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:30.527 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:30.527 ************************************ 00:14:30.527 END TEST spdk_dd_sparse 00:14:30.527 ************************************ 00:14:30.797 14:33:39 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:14:30.797 14:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:30.797 14:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.797 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:30.797 ************************************ 00:14:30.797 START TEST spdk_dd_negative 00:14:30.797 ************************************ 00:14:30.797 14:33:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
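A condensed, hedged recap of the sparse round-trip that just finished (commands, sizes and stat values are copied from the log; the comments and the arithmetic are added here). It shows why 37748736 and 24576 are the expected stat results: three 4 MiB extents in a 36 MiB file survive the file -> lvol -> file trip when --sparse is used.

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  truncate dd_sparse_aio_disk --size 104857600                 # 100 MiB backing file for the dd_aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1                  # 4 MiB allocated extent at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4           # 4 MiB extent at 16 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8           # 4 MiB extent at 32 MiB -> 36 MiB apparent, 12 MiB allocated
  # /dev/fd/62 carries the gen_conf JSON printed above for each step: dd_aio on dd_sparse_aio_disk
  # (4096-byte blocks), plus dd_lvstore and a 37748736-byte thin-provisioned dd_lvol for the bdev legs.
  "$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62
  "$SPDK_DD" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62
  "$SPDK_DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62
  stat --printf=%s file_zero3    # 37748736 bytes apparent (3 x 4 MiB data plus holes = 36 MiB)
  stat --printf=%b file_zero3    # 24576 512-byte blocks allocated = 12 MiB, so the holes were preserved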
00:14:30.797 * Looking for test storage... 00:14:30.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:30.797 14:33:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.797 14:33:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.797 14:33:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.797 14:33:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.797 14:33:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.797 14:33:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.797 14:33:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.797 14:33:39 -- paths/export.sh@5 -- # export PATH 00:14:30.797 14:33:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.797 14:33:39 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:30.797 14:33:39 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:30.797 14:33:39 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:30.797 14:33:39 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:30.797 14:33:39 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:14:30.797 14:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:30.797 14:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.797 14:33:39 -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.797 ************************************ 00:14:30.797 START TEST dd_invalid_arguments 00:14:30.797 ************************************ 00:14:30.797 14:33:39 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:14:30.797 14:33:39 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:14:30.797 14:33:39 -- common/autotest_common.sh@638 -- # local es=0 00:14:30.797 14:33:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:14:30.797 14:33:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.797 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:30.797 14:33:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.797 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:30.797 14:33:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.797 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:30.797 14:33:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.797 14:33:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:30.797 14:33:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:14:31.056 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:14:31.056 options: 00:14:31.056 -c, --config JSON config file 00:14:31.056 --json JSON config file 00:14:31.056 --json-ignore-init-errors 00:14:31.056 don't exit on invalid config entry 00:14:31.056 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:14:31.056 -g, --single-file-segments 00:14:31.056 force creating just one hugetlbfs file 00:14:31.056 -h, --help show this usage 00:14:31.056 -i, --shm-id shared memory ID (optional) 00:14:31.056 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:14:31.056 --lcores lcore to CPU mapping list. The list is in the format: 00:14:31.056 [<,lcores[@CPUs]>...] 00:14:31.056 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:14:31.056 Within the group, '-' is used for range separator, 00:14:31.056 ',' is used for single number separator. 00:14:31.056 '( )' can be omitted for single element group, 00:14:31.056 '@' can be omitted if cpus and lcores have the same value 00:14:31.056 -n, --mem-channels channel number of memory channels used for DPDK 00:14:31.056 -p, --main-core main (primary) core for DPDK 00:14:31.056 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:14:31.056 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:14:31.056 --disable-cpumask-locks Disable CPU core lock files. 
00:14:31.056 --silence-noticelog disable notice level logging to stderr 00:14:31.056 --msg-mempool-size global message memory pool size in count (default: 262143) 00:14:31.056 -u, --no-pci disable PCI access 00:14:31.056 --wait-for-rpc wait for RPCs to initialize subsystems 00:14:31.056 --max-delay maximum reactor delay (in microseconds) 00:14:31.056 -B, --pci-blocked pci addr to block (can be used more than once) 00:14:31.056 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:14:31.056 -R, --huge-unlink unlink huge files after initialization 00:14:31.056 -v, --version print SPDK version 00:14:31.056 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:14:31.056 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:14:31.056 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:14:31.056 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:14:31.056 Tracepoints vary in size and can use more than one trace entry. 00:14:31.056 --rpcs-allowed comma-separated list of permitted RPCS 00:14:31.056 --env-context Opaque context for use of the env implementation 00:14:31.056 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:14:31.056 --no-huge run without using hugepages 00:14:31.056 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:14:31.056 -e, --tpoint-group [:] 00:14:31.056 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all) 00:14:31.056 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:14:31.056 Groups and masks can be combined (e.g/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:14:31.056 [2024-04-17 14:33:39.419051] spdk_dd.c:1479:main: *ERROR*: Invalid arguments 00:14:31.056 . thread,bdev:0x1). 00:14:31.056 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:14:31.056 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:14:31.056 [--------- DD Options ---------] 00:14:31.056 --if Input file. Must specify either --if or --ib. 00:14:31.056 --ib Input bdev. Must specifier either --if or --ib 00:14:31.056 --of Output file. Must specify either --of or --ob. 00:14:31.056 --ob Output bdev. Must specify either --of or --ob. 00:14:31.056 --iflag Input file flags. 00:14:31.056 --oflag Output file flags. 00:14:31.056 --bs I/O unit size (default: 4096) 00:14:31.056 --qd Queue depth (default: 2) 00:14:31.056 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:14:31.056 --skip Skip this many I/O units at start of input. (default: 0) 00:14:31.056 --seek Skip this many I/O units at start of output. (default: 0) 00:14:31.056 --aio Force usage of AIO. (by default io_uring is used if available) 00:14:31.056 --sparse Enable hole skipping in input target 00:14:31.056 Available iflag and oflag values: 00:14:31.056 append - append mode 00:14:31.056 direct - use direct I/O for data 00:14:31.056 directory - fail unless a directory 00:14:31.056 dsync - use synchronized I/O for data 00:14:31.056 noatime - do not update access time 00:14:31.056 noctty - do not assign controlling terminal from file 00:14:31.056 nofollow - do not follow symlinks 00:14:31.056 nonblock - use non-blocking I/O 00:14:31.056 sync - use synchronized I/O for data and metadata 00:14:31.056 14:33:39 -- common/autotest_common.sh@641 -- # es=2 00:14:31.056 14:33:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.056 14:33:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.056 14:33:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.056 00:14:31.056 real 0m0.078s 00:14:31.056 user 0m0.047s 00:14:31.056 sys 0m0.028s 00:14:31.056 14:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.056 ************************************ 00:14:31.056 END TEST dd_invalid_arguments 00:14:31.056 ************************************ 00:14:31.056 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.056 14:33:39 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:14:31.056 14:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.056 14:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.056 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.056 ************************************ 00:14:31.056 START TEST dd_double_input 00:14:31.056 ************************************ 00:14:31.057 14:33:39 -- common/autotest_common.sh@1111 -- # double_input 00:14:31.057 14:33:39 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:14:31.057 14:33:39 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.057 14:33:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:14:31.057 14:33:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.057 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.057 14:33:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.057 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.057 14:33:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.057 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.057 14:33:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.057 14:33:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:31.057 14:33:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:14:31.057 [2024-04-17 14:33:39.580657] spdk_dd.c:1486:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:14:31.057 14:33:39 -- common/autotest_common.sh@641 -- # es=22 00:14:31.057 14:33:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.057 14:33:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.057 14:33:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.057 00:14:31.057 real 0m0.058s 00:14:31.057 user 0m0.037s 00:14:31.057 sys 0m0.021s 00:14:31.057 14:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.057 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.057 ************************************ 00:14:31.057 END TEST dd_double_input 00:14:31.057 ************************************ 00:14:31.057 14:33:39 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:14:31.057 14:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.057 14:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.057 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.317 ************************************ 00:14:31.317 START TEST dd_double_output 00:14:31.317 ************************************ 00:14:31.317 14:33:39 -- common/autotest_common.sh@1111 -- # double_output 00:14:31.317 14:33:39 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:14:31.317 14:33:39 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.317 14:33:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:14:31.317 14:33:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.317 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.317 14:33:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.317 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.317 14:33:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.317 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.317 14:33:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.317 14:33:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:31.317 14:33:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:14:31.317 [2024-04-17 14:33:39.760217] spdk_dd.c:1492:main: *ERROR*: You may specify either --of or --ob, but not both. 
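The negative tests above all follow one pattern: call spdk_dd with an invalid argument combination through the NOT wrapper and check that it fails with the expected status. A hedged summary of the cases seen so far, with invocations and error text lifted from the output and the es=... bookkeeping from autotest_common.sh left out:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP=/home/vagrant/spdk_repo/spdk/test/dd
  "$SPDK_DD" --ii= --ob=                                         # unknown flag: spdk_dd.c:1479 "Invalid arguments", es=2
  "$SPDK_DD" --if="$DUMP/dd.dump0" --ib= --ob=                   # two inputs:  spdk_dd.c:1486 "either --if or --ib, but not both", es=22
  "$SPDK_DD" --if="$DUMP/dd.dump0" --of="$DUMP/dd.dump1" --ob=   # two outputs: spdk_dd.c:1492 "either --of or --ob, but not both", es=22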
00:14:31.317 14:33:39 -- common/autotest_common.sh@641 -- # es=22 00:14:31.317 14:33:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.317 14:33:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.317 14:33:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.317 00:14:31.317 real 0m0.077s 00:14:31.317 user 0m0.052s 00:14:31.317 sys 0m0.024s 00:14:31.317 14:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.317 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.317 ************************************ 00:14:31.317 END TEST dd_double_output 00:14:31.317 ************************************ 00:14:31.317 14:33:39 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:14:31.317 14:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.317 14:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.317 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.317 ************************************ 00:14:31.317 START TEST dd_no_input 00:14:31.317 ************************************ 00:14:31.317 14:33:39 -- common/autotest_common.sh@1111 -- # no_input 00:14:31.317 14:33:39 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:14:31.317 14:33:39 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.317 14:33:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:14:31.317 14:33:39 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.317 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.317 14:33:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.317 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.317 14:33:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.317 14:33:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.317 14:33:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.317 14:33:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:31.317 14:33:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:14:31.575 [2024-04-17 14:33:39.930495] spdk_dd.c:1498:main: *ERROR*: You must specify either --if or --ib 00:14:31.575 14:33:39 -- common/autotest_common.sh@641 -- # es=22 00:14:31.575 14:33:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.575 ************************************ 00:14:31.575 END TEST dd_no_input 00:14:31.575 ************************************ 00:14:31.575 14:33:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.575 14:33:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.575 00:14:31.575 real 0m0.078s 00:14:31.575 user 0m0.051s 00:14:31.575 sys 0m0.025s 00:14:31.575 14:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.575 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.575 14:33:39 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:14:31.575 14:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.575 14:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.575 14:33:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.575 ************************************ 
00:14:31.575 START TEST dd_no_output 00:14:31.575 ************************************ 00:14:31.575 14:33:40 -- common/autotest_common.sh@1111 -- # no_output 00:14:31.575 14:33:40 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:31.575 14:33:40 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.575 14:33:40 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:31.575 14:33:40 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.575 14:33:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.575 14:33:40 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.575 14:33:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.576 14:33:40 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.576 14:33:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.576 14:33:40 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.576 14:33:40 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:31.576 14:33:40 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:31.576 [2024-04-17 14:33:40.111585] spdk_dd.c:1504:main: *ERROR*: You must specify either --of or --ob 00:14:31.576 14:33:40 -- common/autotest_common.sh@641 -- # es=22 00:14:31.576 14:33:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.576 14:33:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.576 14:33:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.576 00:14:31.576 real 0m0.060s 00:14:31.576 user 0m0.036s 00:14:31.576 sys 0m0.023s 00:14:31.576 14:33:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.576 ************************************ 00:14:31.576 END TEST dd_no_output 00:14:31.576 ************************************ 00:14:31.576 14:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:31.576 14:33:40 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:14:31.576 14:33:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.576 14:33:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.576 14:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:31.834 ************************************ 00:14:31.834 START TEST dd_wrong_blocksize 00:14:31.834 ************************************ 00:14:31.834 14:33:40 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:14:31.834 14:33:40 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:14:31.834 14:33:40 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.834 14:33:40 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:14:31.834 14:33:40 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.834 14:33:40 -- common/autotest_common.sh@630 -- # case 
"$(type -t "$arg")" in 00:14:31.834 14:33:40 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.834 14:33:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.834 14:33:40 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.834 14:33:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.834 14:33:40 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.834 14:33:40 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:31.834 14:33:40 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:14:31.834 [2024-04-17 14:33:40.277545] spdk_dd.c:1510:main: *ERROR*: Invalid --bs value 00:14:31.834 14:33:40 -- common/autotest_common.sh@641 -- # es=22 00:14:31.834 14:33:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.834 14:33:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.834 14:33:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.834 00:14:31.834 real 0m0.062s 00:14:31.834 user 0m0.039s 00:14:31.834 sys 0m0.022s 00:14:31.834 14:33:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.834 14:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:31.834 ************************************ 00:14:31.834 END TEST dd_wrong_blocksize 00:14:31.834 ************************************ 00:14:31.834 14:33:40 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:14:31.834 14:33:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.834 14:33:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.834 14:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:31.834 ************************************ 00:14:31.834 START TEST dd_smaller_blocksize 00:14:31.834 ************************************ 00:14:31.834 14:33:40 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:14:31.834 14:33:40 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:14:31.834 14:33:40 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.834 14:33:40 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:14:31.834 14:33:40 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.834 14:33:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.834 14:33:40 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.834 14:33:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.834 14:33:40 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.834 14:33:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.834 14:33:40 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.834 14:33:40 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:14:31.834 14:33:40 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:14:32.092 [2024-04-17 14:33:40.443516] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:32.092 [2024-04-17 14:33:40.443598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64370 ] 00:14:32.092 [2024-04-17 14:33:40.599274] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.092 [2024-04-17 14:33:40.680858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.350 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:14:32.608 [2024-04-17 14:33:40.966932] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:14:32.608 [2024-04-17 14:33:40.967027] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:32.608 [2024-04-17 14:33:41.034873] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:14:32.608 14:33:41 -- common/autotest_common.sh@641 -- # es=244 00:14:32.608 14:33:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:32.608 14:33:41 -- common/autotest_common.sh@650 -- # es=116 00:14:32.608 14:33:41 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:32.608 14:33:41 -- common/autotest_common.sh@658 -- # es=1 00:14:32.608 14:33:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:32.608 00:14:32.608 real 0m0.753s 00:14:32.608 user 0m0.380s 00:14:32.608 sys 0m0.266s 00:14:32.608 14:33:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:32.608 14:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:32.608 ************************************ 00:14:32.608 END TEST dd_smaller_blocksize 00:14:32.608 ************************************ 00:14:32.608 14:33:41 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:14:32.608 14:33:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:32.608 14:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.608 14:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:32.866 ************************************ 00:14:32.866 START TEST dd_invalid_count 00:14:32.866 ************************************ 00:14:32.866 14:33:41 -- common/autotest_common.sh@1111 -- # invalid_count 00:14:32.866 14:33:41 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:14:32.866 14:33:41 -- common/autotest_common.sh@638 -- # local es=0 00:14:32.866 14:33:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:14:32.866 14:33:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.866 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.866 14:33:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.866 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.866 14:33:41 -- 
common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.866 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.866 14:33:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.866 14:33:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:32.866 14:33:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:14:32.866 [2024-04-17 14:33:41.302588] spdk_dd.c:1516:main: *ERROR*: Invalid --count value 00:14:32.866 14:33:41 -- common/autotest_common.sh@641 -- # es=22 00:14:32.866 ************************************ 00:14:32.866 END TEST dd_invalid_count 00:14:32.866 ************************************ 00:14:32.866 14:33:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:32.866 14:33:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:32.866 14:33:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:32.866 00:14:32.866 real 0m0.060s 00:14:32.866 user 0m0.035s 00:14:32.866 sys 0m0.024s 00:14:32.866 14:33:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:32.866 14:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:32.866 14:33:41 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:14:32.866 14:33:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:32.866 14:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.866 14:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:32.866 ************************************ 00:14:32.867 START TEST dd_invalid_oflag 00:14:32.867 ************************************ 00:14:32.867 14:33:41 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:14:32.867 14:33:41 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:14:32.867 14:33:41 -- common/autotest_common.sh@638 -- # local es=0 00:14:32.867 14:33:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:14:32.867 14:33:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.867 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.867 14:33:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.867 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.867 14:33:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.867 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.867 14:33:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.867 14:33:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:32.867 14:33:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:14:33.125 [2024-04-17 14:33:41.488674] spdk_dd.c:1522:main: *ERROR*: --oflags may be used only with --of 00:14:33.125 14:33:41 -- common/autotest_common.sh@641 -- # es=22 00:14:33.125 14:33:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.125 14:33:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:33.125 
14:33:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.125 00:14:33.125 real 0m0.082s 00:14:33.125 user 0m0.054s 00:14:33.125 sys 0m0.027s 00:14:33.125 14:33:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.125 14:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.125 ************************************ 00:14:33.125 END TEST dd_invalid_oflag 00:14:33.125 ************************************ 00:14:33.125 14:33:41 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:14:33.125 14:33:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.125 14:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.125 14:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.125 ************************************ 00:14:33.125 START TEST dd_invalid_iflag 00:14:33.125 ************************************ 00:14:33.125 14:33:41 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:14:33.125 14:33:41 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:14:33.125 14:33:41 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.125 14:33:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:14:33.125 14:33:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.125 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.125 14:33:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.125 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.125 14:33:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.125 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.125 14:33:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.125 14:33:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:33.125 14:33:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:14:33.125 [2024-04-17 14:33:41.654531] spdk_dd.c:1528:main: *ERROR*: --iflags may be used only with --if 00:14:33.125 14:33:41 -- common/autotest_common.sh@641 -- # es=22 00:14:33.125 14:33:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.125 14:33:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:33.125 14:33:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.125 00:14:33.125 real 0m0.073s 00:14:33.125 user 0m0.048s 00:14:33.125 sys 0m0.024s 00:14:33.125 14:33:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.125 14:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.125 ************************************ 00:14:33.125 END TEST dd_invalid_iflag 00:14:33.125 ************************************ 00:14:33.125 14:33:41 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:14:33.125 14:33:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.125 14:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.125 14:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:33.384 ************************************ 00:14:33.384 START TEST dd_unknown_flag 00:14:33.384 ************************************ 00:14:33.384 14:33:41 -- common/autotest_common.sh@1111 -- # 
unknown_flag 00:14:33.384 14:33:41 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:14:33.384 14:33:41 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.384 14:33:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:14:33.384 14:33:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.384 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.384 14:33:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.384 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.384 14:33:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.384 14:33:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.384 14:33:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.384 14:33:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:33.384 14:33:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:14:33.384 [2024-04-17 14:33:41.810750] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:33.384 [2024-04-17 14:33:41.810868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64490 ] 00:14:33.384 [2024-04-17 14:33:41.947122] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.643 [2024-04-17 14:33:42.004649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.643 [2024-04-17 14:33:42.051379] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:14:33.643 [2024-04-17 14:33:42.051443] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:33.643 [2024-04-17 14:33:42.051502] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:14:33.643 [2024-04-17 14:33:42.051516] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:33.643 [2024-04-17 14:33:42.051730] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:14:33.643 [2024-04-17 14:33:42.051747] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:33.643 [2024-04-17 14:33:42.051800] app.c: 946:app_stop: *NOTICE*: spdk_app_stop called twice 00:14:33.643 [2024-04-17 14:33:42.051823] app.c: 946:app_stop: *NOTICE*: spdk_app_stop called twice 00:14:33.643 [2024-04-17 14:33:42.115237] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:14:33.643 14:33:42 -- common/autotest_common.sh@641 -- # es=234 00:14:33.643 14:33:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.643 14:33:42 -- common/autotest_common.sh@650 -- # es=106 00:14:33.643 14:33:42 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:33.643 14:33:42 -- common/autotest_common.sh@658 -- # es=1 00:14:33.643 14:33:42 -- 
common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.643 00:14:33.643 real 0m0.467s 00:14:33.643 user 0m0.270s 00:14:33.643 sys 0m0.102s 00:14:33.643 14:33:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.643 ************************************ 00:14:33.643 END TEST dd_unknown_flag 00:14:33.643 ************************************ 00:14:33.643 14:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:33.903 14:33:42 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:14:33.903 14:33:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.903 14:33:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.903 14:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:33.903 ************************************ 00:14:33.903 START TEST dd_invalid_json 00:14:33.903 ************************************ 00:14:33.903 14:33:42 -- common/autotest_common.sh@1111 -- # invalid_json 00:14:33.903 14:33:42 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:14:33.903 14:33:42 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.903 14:33:42 -- dd/negative_dd.sh@95 -- # : 00:14:33.903 14:33:42 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:14:33.903 14:33:42 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.903 14:33:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.903 14:33:42 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.903 14:33:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.903 14:33:42 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.903 14:33:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.903 14:33:42 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.903 14:33:42 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:33.903 14:33:42 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:14:33.903 [2024-04-17 14:33:42.374024] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:14:33.903 [2024-04-17 14:33:42.374124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64517 ] 00:14:34.161 [2024-04-17 14:33:42.548865] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.161 [2024-04-17 14:33:42.607764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.161 [2024-04-17 14:33:42.607838] json_config.c: 509:parse_json: *ERROR*: JSON data cannot be empty 00:14:34.161 [2024-04-17 14:33:42.607856] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:34.161 [2024-04-17 14:33:42.607866] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:34.161 [2024-04-17 14:33:42.607904] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:14:34.161 14:33:42 -- common/autotest_common.sh@641 -- # es=234 00:14:34.161 14:33:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:34.161 14:33:42 -- common/autotest_common.sh@650 -- # es=106 00:14:34.161 14:33:42 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:34.161 14:33:42 -- common/autotest_common.sh@658 -- # es=1 00:14:34.161 14:33:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:34.161 00:14:34.161 real 0m0.395s 00:14:34.161 user 0m0.241s 00:14:34.161 sys 0m0.051s 00:14:34.161 14:33:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:34.161 14:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:34.161 ************************************ 00:14:34.161 END TEST dd_invalid_json 00:14:34.161 ************************************ 00:14:34.161 00:14:34.161 real 0m3.537s 00:14:34.161 user 0m1.761s 00:14:34.161 sys 0m1.333s 00:14:34.161 14:33:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:34.161 14:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:34.161 ************************************ 00:14:34.161 END TEST spdk_dd_negative 00:14:34.161 ************************************ 00:14:34.420 00:14:34.420 real 1m13.719s 00:14:34.420 user 0m49.022s 00:14:34.420 sys 0m29.944s 00:14:34.420 14:33:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:34.420 14:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:34.420 ************************************ 00:14:34.420 END TEST spdk_dd 00:14:34.420 ************************************ 00:14:34.420 14:33:42 -- spdk/autotest.sh@206 -- # '[' 0 -eq 1 ']' 00:14:34.420 14:33:42 -- spdk/autotest.sh@253 -- # '[' 0 -eq 1 ']' 00:14:34.420 14:33:42 -- spdk/autotest.sh@257 -- # timing_exit lib 00:14:34.420 14:33:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:34.420 14:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:34.420 14:33:42 -- spdk/autotest.sh@259 -- # '[' 0 -eq 1 ']' 00:14:34.420 14:33:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:14:34.420 14:33:42 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:14:34.420 14:33:42 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:14:34.420 14:33:42 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:14:34.420 14:33:42 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:14:34.420 14:33:42 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:14:34.420 14:33:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:34.420 14:33:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.420 14:33:42 -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.420 ************************************ 00:14:34.421 START TEST nvmf_tcp 00:14:34.421 ************************************ 00:14:34.421 14:33:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:14:34.421 * Looking for test storage... 00:14:34.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:34.421 14:33:42 -- nvmf/nvmf.sh@10 -- # uname -s 00:14:34.421 14:33:42 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:14:34.421 14:33:42 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.421 14:33:42 -- nvmf/common.sh@7 -- # uname -s 00:14:34.421 14:33:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.421 14:33:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.421 14:33:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.421 14:33:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.421 14:33:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.421 14:33:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.421 14:33:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.421 14:33:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.421 14:33:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.421 14:33:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.421 14:33:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:14:34.421 14:33:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:14:34.421 14:33:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.421 14:33:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.421 14:33:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.421 14:33:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.421 14:33:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.421 14:33:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.421 14:33:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.421 14:33:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.421 14:33:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.421 14:33:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.421 14:33:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.421 14:33:43 -- paths/export.sh@5 -- # export PATH 00:14:34.421 14:33:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.421 14:33:43 -- nvmf/common.sh@47 -- # : 0 00:14:34.421 14:33:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.421 14:33:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.421 14:33:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.421 14:33:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.421 14:33:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.421 14:33:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.421 14:33:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.421 14:33:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.421 14:33:43 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:34.421 14:33:43 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:14:34.421 14:33:43 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:14:34.421 14:33:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:34.421 14:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:34.421 14:33:43 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:14:34.421 14:33:43 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:34.421 14:33:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:34.421 14:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.680 14:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:34.680 ************************************ 00:14:34.680 START TEST nvmf_host_management 00:14:34.680 ************************************ 00:14:34.680 14:33:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:34.680 * Looking for test storage... 
00:14:34.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:34.680 14:33:43 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.680 14:33:43 -- nvmf/common.sh@7 -- # uname -s 00:14:34.680 14:33:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.680 14:33:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.680 14:33:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.680 14:33:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.680 14:33:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.680 14:33:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.680 14:33:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.680 14:33:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.680 14:33:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.680 14:33:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.680 14:33:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:14:34.680 14:33:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:14:34.680 14:33:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.680 14:33:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.680 14:33:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.680 14:33:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.680 14:33:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.680 14:33:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.680 14:33:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.680 14:33:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.680 14:33:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.680 14:33:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.680 14:33:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.680 14:33:43 -- paths/export.sh@5 -- # export PATH 00:14:34.680 14:33:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.680 14:33:43 -- nvmf/common.sh@47 -- # : 0 00:14:34.680 14:33:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.680 14:33:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.680 14:33:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.680 14:33:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.680 14:33:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.680 14:33:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.680 14:33:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.680 14:33:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.680 14:33:43 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:34.680 14:33:43 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:34.680 14:33:43 -- target/host_management.sh@104 -- # nvmftestinit 00:14:34.680 14:33:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:34.680 14:33:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.680 14:33:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:34.680 14:33:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:34.680 14:33:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:34.680 14:33:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.680 14:33:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.680 14:33:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.680 14:33:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:34.680 14:33:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:34.680 14:33:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:34.680 14:33:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:34.680 14:33:43 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:34.680 14:33:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:34.680 14:33:43 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.680 14:33:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.680 14:33:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:34.680 14:33:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:34.680 14:33:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.680 14:33:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.680 14:33:43 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.680 14:33:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.680 14:33:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.680 14:33:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.680 14:33:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.680 14:33:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.680 14:33:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:34.680 Cannot find device "nvmf_init_br" 00:14:34.680 14:33:43 -- nvmf/common.sh@154 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:34.680 Cannot find device "nvmf_tgt_br" 00:14:34.680 14:33:43 -- nvmf/common.sh@155 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.680 Cannot find device "nvmf_tgt_br2" 00:14:34.680 14:33:43 -- nvmf/common.sh@156 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:34.680 Cannot find device "nvmf_init_br" 00:14:34.680 14:33:43 -- nvmf/common.sh@157 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:34.680 Cannot find device "nvmf_tgt_br" 00:14:34.680 14:33:43 -- nvmf/common.sh@158 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:34.680 Cannot find device "nvmf_tgt_br2" 00:14:34.680 14:33:43 -- nvmf/common.sh@159 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:34.680 Cannot find device "nvmf_br" 00:14:34.680 14:33:43 -- nvmf/common.sh@160 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:34.680 Cannot find device "nvmf_init_if" 00:14:34.680 14:33:43 -- nvmf/common.sh@161 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.680 14:33:43 -- nvmf/common.sh@162 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.680 14:33:43 -- nvmf/common.sh@163 -- # true 00:14:34.680 14:33:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:34.680 14:33:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:34.938 14:33:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:34.938 14:33:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:34.938 14:33:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:34.938 14:33:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:34.938 14:33:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:34.938 14:33:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:34.938 14:33:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:34.938 14:33:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:34.938 14:33:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:34.938 14:33:43 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:34.938 14:33:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:34.939 14:33:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:34.939 14:33:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:34.939 14:33:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:34.939 14:33:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:34.939 14:33:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:34.939 14:33:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:34.939 14:33:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:34.939 14:33:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:34.939 14:33:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:34.939 14:33:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:34.939 14:33:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:34.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:14:34.939 00:14:34.939 --- 10.0.0.2 ping statistics --- 00:14:34.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.939 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:14:34.939 14:33:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:34.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:34.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:14:34.939 00:14:34.939 --- 10.0.0.3 ping statistics --- 00:14:34.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.939 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:34.939 14:33:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:34.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:34.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:34.939 00:14:34.939 --- 10.0.0.1 ping statistics --- 00:14:34.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.939 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:34.939 14:33:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.939 14:33:43 -- nvmf/common.sh@422 -- # return 0 00:14:34.939 14:33:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:34.939 14:33:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.939 14:33:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:34.939 14:33:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:34.939 14:33:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.939 14:33:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:34.939 14:33:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:35.197 14:33:43 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:35.197 14:33:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:35.197 14:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.197 14:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:35.197 ************************************ 00:14:35.197 START TEST nvmf_host_management 00:14:35.197 ************************************ 00:14:35.197 14:33:43 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:14:35.197 14:33:43 -- target/host_management.sh@69 -- # starttarget 00:14:35.197 14:33:43 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:35.197 14:33:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:35.197 14:33:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:35.197 14:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:35.197 14:33:43 -- nvmf/common.sh@470 -- # nvmfpid=64792 00:14:35.197 14:33:43 -- nvmf/common.sh@471 -- # waitforlisten 64792 00:14:35.197 14:33:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:35.197 14:33:43 -- common/autotest_common.sh@817 -- # '[' -z 64792 ']' 00:14:35.197 14:33:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.197 14:33:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:35.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.197 14:33:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.197 14:33:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:35.197 14:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:35.197 [2024-04-17 14:33:43.706776] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:35.197 [2024-04-17 14:33:43.706887] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.455 [2024-04-17 14:33:43.847612] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.455 [2024-04-17 14:33:43.921994] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.455 [2024-04-17 14:33:43.922069] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:35.455 [2024-04-17 14:33:43.922084] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.455 [2024-04-17 14:33:43.922094] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.455 [2024-04-17 14:33:43.922103] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.455 [2024-04-17 14:33:43.922216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.455 [2024-04-17 14:33:43.922332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:35.455 [2024-04-17 14:33:43.922342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.455 [2024-04-17 14:33:43.922264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.387 14:33:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:36.387 14:33:44 -- common/autotest_common.sh@850 -- # return 0 00:14:36.387 14:33:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:36.387 14:33:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:36.387 14:33:44 -- common/autotest_common.sh@10 -- # set +x 00:14:36.387 14:33:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.387 14:33:44 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.387 14:33:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.387 14:33:44 -- common/autotest_common.sh@10 -- # set +x 00:14:36.387 [2024-04-17 14:33:44.893983] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.387 14:33:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.388 14:33:44 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:36.388 14:33:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:36.388 14:33:44 -- common/autotest_common.sh@10 -- # set +x 00:14:36.388 14:33:44 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:36.388 14:33:44 -- target/host_management.sh@23 -- # cat 00:14:36.388 14:33:44 -- target/host_management.sh@30 -- # rpc_cmd 00:14:36.388 14:33:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.388 14:33:44 -- common/autotest_common.sh@10 -- # set +x 00:14:36.388 Malloc0 00:14:36.388 [2024-04-17 14:33:44.961430] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.388 14:33:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.388 14:33:44 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:36.388 14:33:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:36.388 14:33:44 -- common/autotest_common.sh@10 -- # set +x 00:14:36.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
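The target setup traced above boils down to a short RPC sequence. The sketch below is a rough equivalent using the standard scripts/rpc.py client against the default nvmf_tgt RPC socket; the test itself drives the same calls through its rpc_cmd helper and a generated rpcs.txt batch, so the literal invocation differs:

  # TCP transport with the flags seen in the trace (-o, -u 8192)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512 B blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Subsystem cnode0 backed by Malloc0 (serial as defined in nvmf/common.sh),
  # listening on the namespaced target address set up by nvmf_veth_init
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Allow the host NQN that the host-management test later removes and re-adds
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0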
00:14:36.645 14:33:45 -- target/host_management.sh@73 -- # perfpid=64857 00:14:36.645 14:33:45 -- target/host_management.sh@74 -- # waitforlisten 64857 /var/tmp/bdevperf.sock 00:14:36.645 14:33:45 -- common/autotest_common.sh@817 -- # '[' -z 64857 ']' 00:14:36.645 14:33:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.645 14:33:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.645 14:33:45 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:36.645 14:33:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.645 14:33:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.645 14:33:45 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:36.645 14:33:45 -- common/autotest_common.sh@10 -- # set +x 00:14:36.645 14:33:45 -- nvmf/common.sh@521 -- # config=() 00:14:36.645 14:33:45 -- nvmf/common.sh@521 -- # local subsystem config 00:14:36.645 14:33:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:36.645 14:33:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:36.645 { 00:14:36.645 "params": { 00:14:36.645 "name": "Nvme$subsystem", 00:14:36.645 "trtype": "$TEST_TRANSPORT", 00:14:36.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.645 "adrfam": "ipv4", 00:14:36.645 "trsvcid": "$NVMF_PORT", 00:14:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.645 "hdgst": ${hdgst:-false}, 00:14:36.645 "ddgst": ${ddgst:-false} 00:14:36.645 }, 00:14:36.645 "method": "bdev_nvme_attach_controller" 00:14:36.645 } 00:14:36.645 EOF 00:14:36.645 )") 00:14:36.645 14:33:45 -- nvmf/common.sh@543 -- # cat 00:14:36.645 14:33:45 -- nvmf/common.sh@545 -- # jq . 00:14:36.645 14:33:45 -- nvmf/common.sh@546 -- # IFS=, 00:14:36.645 14:33:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:36.645 "params": { 00:14:36.645 "name": "Nvme0", 00:14:36.645 "trtype": "tcp", 00:14:36.645 "traddr": "10.0.0.2", 00:14:36.645 "adrfam": "ipv4", 00:14:36.645 "trsvcid": "4420", 00:14:36.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:36.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:36.645 "hdgst": false, 00:14:36.645 "ddgst": false 00:14:36.645 }, 00:14:36.645 "method": "bdev_nvme_attach_controller" 00:14:36.645 }' 00:14:36.645 [2024-04-17 14:33:45.060516] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:36.645 [2024-04-17 14:33:45.060631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64857 ] 00:14:36.645 [2024-04-17 14:33:45.203524] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.903 [2024-04-17 14:33:45.274278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.903 Running I/O for 10 seconds... 
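The JSON fed to bdevperf above attaches one controller, Nvme0, over TCP to 10.0.0.2:4420 using the cnode0/host0 NQNs, and bdevperf then runs a 64-deep, 64 KiB verify workload for 10 seconds against the resulting Nvme0n1 bdev. The trace that follows is the host-management check itself: wait until some reads have completed, drop the host from the subsystem (in-flight I/O is expected to be aborted — the "ABORTED - SQ DELETION" completions below), then re-add it. A simplified sketch of that sequence, assuming scripts/rpc.py in place of the script's rpc_cmd wrapper and a plain polling loop in place of its retry counter:

  # Poll bdevperf's RPC socket until the verify job has completed >= 100 reads
  while :; do
    reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 1
  done
  # Revoke the host on the target side: outstanding I/O on Nvme0n1 should now abort
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Restore access so the remainder of the run can verify the path recovers
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0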
00:14:37.469 14:33:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:37.469 14:33:46 -- common/autotest_common.sh@850 -- # return 0 00:14:37.469 14:33:46 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:37.469 14:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.731 14:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.731 14:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.731 14:33:46 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.731 14:33:46 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:37.731 14:33:46 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:37.731 14:33:46 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:37.731 14:33:46 -- target/host_management.sh@52 -- # local ret=1 00:14:37.731 14:33:46 -- target/host_management.sh@53 -- # local i 00:14:37.731 14:33:46 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:37.731 14:33:46 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:37.731 14:33:46 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:37.731 14:33:46 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:37.731 14:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.731 14:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.731 14:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.731 14:33:46 -- target/host_management.sh@55 -- # read_io_count=835 00:14:37.731 14:33:46 -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:14:37.731 14:33:46 -- target/host_management.sh@59 -- # ret=0 00:14:37.731 14:33:46 -- target/host_management.sh@60 -- # break 00:14:37.731 14:33:46 -- target/host_management.sh@64 -- # return 0 00:14:37.731 14:33:46 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:37.731 14:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.731 14:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.731 [2024-04-17 14:33:46.150521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the 
state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150643] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150669] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150678] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150711] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150736] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150745] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150778] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150837] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150845] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150885] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150911] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150928] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.150992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.151005] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.151019] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.151033] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.151047] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.151060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 
14:33:46.151073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.151087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.151101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.731 [2024-04-17 14:33:46.151115] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.151128] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.151141] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.151154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.151167] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.151181] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.151196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.151210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.151224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6640 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.154473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.732 14:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.732 14:33:46 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:37.732 [2024-04-17 14:33:46.155046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 14:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.732 [2024-04-17 14:33:46.155214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.732 [2024-04-17 14:33:46.155331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 14:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.732 [2024-04-17 14:33:46.155431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.732 [2024-04-17 14:33:46.155544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.155650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.732 [2024-04-17 
14:33:46.155760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.155863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f1b0 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.156654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.156820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.156937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.157089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.157317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.157452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.157551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.157640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.157860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.157994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.158121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.158215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.158469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.158626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.158730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.158839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.158970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.159244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.159376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.159483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.159590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.159810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.159922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.160051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.160143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.160373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.160509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.160635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.160753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.160997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.161136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.161255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.161365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.161589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.161723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.161820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.161928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.162208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.162342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.162464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.162697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 14:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.732 [2024-04-17 14:33:46.162827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 14:33:46 -- target/host_management.sh@87 -- # sleep 1 00:14:37.732 [2024-04-17 14:33:46.162922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.163075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.163319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.163442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.163560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.163690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.163936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.164077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.164173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.164280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.164372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.164653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.164771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.164856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.164937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.165193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.165309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.165421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.165537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.165784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.165901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.166015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.166133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.166360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.166473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.166579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.166674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.166885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.167025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.167150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.167252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.167497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.167614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.167732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.167827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.168067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.168188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.168307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.168420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.168700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.168827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.168918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.169055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.169152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.169374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.169503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.169614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.169720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.169965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.170097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.170206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.170309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.170558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.170696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.170807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.170922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.171050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.171278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.171392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.171481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.171596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.171826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.171981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.172109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.172343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.172472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.172573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.172661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.172895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.173027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.173126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.173237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.173474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.173612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.173710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.173819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.174083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.174215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.174263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.174282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.174297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.174314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.174328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.174346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.732 [2024-04-17 14:33:46.174360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.732 [2024-04-17 14:33:46.174375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1434940 is same with the state(5) to be set 00:14:37.732 [2024-04-17 14:33:46.174445] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1434940 was disconnected and freed. reset controller. 00:14:37.733 [2024-04-17 14:33:46.174527] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140f1b0 (9): Bad file descriptor 00:14:37.733 [2024-04-17 14:33:46.175938] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:37.733 task offset: 114688 on job bdev=Nvme0n1 fails 00:14:37.733 00:14:37.733 Latency(us) 00:14:37.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.733 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:37.733 Job: Nvme0n1 ended in about 0.75 seconds with error 00:14:37.733 Verification LBA range: start 0x0 length 0x400 00:14:37.733 Nvme0n1 : 0.75 1196.58 74.79 85.47 0.00 48516.96 11141.12 53620.36 00:14:37.733 =================================================================================================================== 00:14:37.733 Total : 1196.58 74.79 85.47 0.00 48516.96 11141.12 53620.36 00:14:37.733 [2024-04-17 14:33:46.178512] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:37.733 [2024-04-17 14:33:46.184736] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
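The ABORTED - SQ DELETION completions and the controller reset above line up with the host allow-list RPC being traced at target/host_management.sh@85. A minimal hand-run sketch of that allow-list toggle, assuming the rpc.py path seen elsewhere in this log, the default /var/tmp/spdk.sock RPC socket, and the cnode0/host0 NQNs from the trace; the real script's timing and verification steps are omitted, and the remove_host step is included here only to illustrate the counterpart RPC:

#!/usr/bin/env bash
# Sketch only: toggle a host on the subsystem's allow list while I/O is running.
# Paths, socket and NQNs are assumptions taken from this trace; adjust for a real setup.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
SUBSYS=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host0

# Dropping the host makes the target tear down its qpairs; the initiator then sees
# ABORTED - SQ DELETION on in-flight reads and resets the controller.
"$RPC" -s "$SOCK" nvmf_subsystem_remove_host "$SUBSYS" "$HOST"
sleep 1
# Re-adding the host lets the initiator's automatic reset reconnect successfully.
"$RPC" -s "$SOCK" nvmf_subsystem_add_host "$SUBSYS" "$HOST"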
00:14:38.666 14:33:47 -- target/host_management.sh@91 -- # kill -9 64857 00:14:38.666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64857) - No such process 00:14:38.666 14:33:47 -- target/host_management.sh@91 -- # true 00:14:38.666 14:33:47 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:38.666 14:33:47 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:38.666 14:33:47 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:38.666 14:33:47 -- nvmf/common.sh@521 -- # config=() 00:14:38.666 14:33:47 -- nvmf/common.sh@521 -- # local subsystem config 00:14:38.666 14:33:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:38.666 14:33:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:38.666 { 00:14:38.666 "params": { 00:14:38.666 "name": "Nvme$subsystem", 00:14:38.666 "trtype": "$TEST_TRANSPORT", 00:14:38.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:38.666 "adrfam": "ipv4", 00:14:38.666 "trsvcid": "$NVMF_PORT", 00:14:38.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.666 "hdgst": ${hdgst:-false}, 00:14:38.666 "ddgst": ${ddgst:-false} 00:14:38.666 }, 00:14:38.666 "method": "bdev_nvme_attach_controller" 00:14:38.666 } 00:14:38.667 EOF 00:14:38.667 )") 00:14:38.667 14:33:47 -- nvmf/common.sh@543 -- # cat 00:14:38.667 14:33:47 -- nvmf/common.sh@545 -- # jq . 00:14:38.667 14:33:47 -- nvmf/common.sh@546 -- # IFS=, 00:14:38.667 14:33:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:38.667 "params": { 00:14:38.667 "name": "Nvme0", 00:14:38.667 "trtype": "tcp", 00:14:38.667 "traddr": "10.0.0.2", 00:14:38.667 "adrfam": "ipv4", 00:14:38.667 "trsvcid": "4420", 00:14:38.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:38.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:38.667 "hdgst": false, 00:14:38.667 "ddgst": false 00:14:38.667 }, 00:14:38.667 "method": "bdev_nvme_attach_controller" 00:14:38.667 }' 00:14:38.667 [2024-04-17 14:33:47.216193] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:38.667 [2024-04-17 14:33:47.216286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64895 ] 00:14:38.924 [2024-04-17 14:33:47.386717] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.924 [2024-04-17 14:33:47.469752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.182 Running I/O for 1 seconds... 
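For context on the command just traced: gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed above, and bdevperf reads them through --json /dev/fd/62. A standalone sketch of the same invocation, assuming the standard SPDK JSON-config wrapper ("subsystems" / "bdev" / "config") around those printed params, which the trace itself does not show:

# Sketch: run bdevperf against the TCP target with an explicit JSON config file.
# The wrapper structure is an assumption; the params block mirrors the printf output above.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  -q 64 -o 65536 -w verify -t 1 --json "$CONFIG"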
00:14:40.116 00:14:40.116 Latency(us) 00:14:40.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.116 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:40.116 Verification LBA range: start 0x0 length 0x400 00:14:40.116 Nvme0n1 : 1.03 1430.64 89.42 0.00 0.00 43393.77 3991.74 49330.73 00:14:40.116 =================================================================================================================== 00:14:40.116 Total : 1430.64 89.42 0.00 0.00 43393.77 3991.74 49330.73 00:14:40.374 14:33:48 -- target/host_management.sh@101 -- # stoptarget 00:14:40.374 14:33:48 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:40.374 14:33:48 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:40.374 14:33:48 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:40.374 14:33:48 -- target/host_management.sh@40 -- # nvmftestfini 00:14:40.374 14:33:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:40.374 14:33:48 -- nvmf/common.sh@117 -- # sync 00:14:40.632 14:33:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.632 14:33:49 -- nvmf/common.sh@120 -- # set +e 00:14:40.632 14:33:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.632 14:33:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.632 rmmod nvme_tcp 00:14:40.632 rmmod nvme_fabrics 00:14:40.632 rmmod nvme_keyring 00:14:40.632 14:33:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.632 14:33:49 -- nvmf/common.sh@124 -- # set -e 00:14:40.632 14:33:49 -- nvmf/common.sh@125 -- # return 0 00:14:40.632 14:33:49 -- nvmf/common.sh@478 -- # '[' -n 64792 ']' 00:14:40.632 14:33:49 -- nvmf/common.sh@479 -- # killprocess 64792 00:14:40.632 14:33:49 -- common/autotest_common.sh@936 -- # '[' -z 64792 ']' 00:14:40.632 14:33:49 -- common/autotest_common.sh@940 -- # kill -0 64792 00:14:40.632 14:33:49 -- common/autotest_common.sh@941 -- # uname 00:14:40.632 14:33:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.632 14:33:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64792 00:14:40.632 killing process with pid 64792 00:14:40.632 14:33:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:40.632 14:33:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:40.632 14:33:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64792' 00:14:40.632 14:33:49 -- common/autotest_common.sh@955 -- # kill 64792 00:14:40.632 14:33:49 -- common/autotest_common.sh@960 -- # wait 64792 00:14:40.890 [2024-04-17 14:33:49.269624] app.c: 628:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:40.890 14:33:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:40.890 14:33:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:40.890 14:33:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:40.890 14:33:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.890 14:33:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:40.890 14:33:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.890 14:33:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.890 14:33:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.890 14:33:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:40.890 00:14:40.890 real 0m5.695s 00:14:40.890 user 
0m24.167s 00:14:40.890 sys 0m1.238s 00:14:40.890 ************************************ 00:14:40.890 END TEST nvmf_host_management 00:14:40.890 ************************************ 00:14:40.890 14:33:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:40.890 14:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:40.890 14:33:49 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:40.890 00:14:40.890 real 0m6.272s 00:14:40.890 user 0m24.308s 00:14:40.890 sys 0m1.480s 00:14:40.890 ************************************ 00:14:40.890 END TEST nvmf_host_management 00:14:40.890 ************************************ 00:14:40.890 14:33:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:40.890 14:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:40.890 14:33:49 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:40.890 14:33:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:40.890 14:33:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.890 14:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:40.890 ************************************ 00:14:40.890 START TEST nvmf_lvol 00:14:40.890 ************************************ 00:14:40.890 14:33:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:41.163 * Looking for test storage... 00:14:41.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:41.163 14:33:49 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.163 14:33:49 -- nvmf/common.sh@7 -- # uname -s 00:14:41.163 14:33:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.163 14:33:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.163 14:33:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.163 14:33:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.163 14:33:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.163 14:33:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.163 14:33:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.163 14:33:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.163 14:33:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.163 14:33:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.163 14:33:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:14:41.163 14:33:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:14:41.163 14:33:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.163 14:33:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.163 14:33:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.163 14:33:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.163 14:33:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.163 14:33:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.163 14:33:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.163 14:33:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.163 14:33:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.163 14:33:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.163 14:33:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.163 14:33:49 -- paths/export.sh@5 -- # export PATH 00:14:41.163 14:33:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.163 14:33:49 -- nvmf/common.sh@47 -- # : 0 00:14:41.163 14:33:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.163 14:33:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.163 14:33:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.163 14:33:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.163 14:33:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.163 14:33:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.163 14:33:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.163 14:33:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.163 14:33:49 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.163 14:33:49 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.163 14:33:49 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:41.163 14:33:49 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:41.163 14:33:49 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.163 14:33:49 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:41.163 14:33:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:41.163 14:33:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:14:41.163 14:33:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:41.163 14:33:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:41.163 14:33:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:41.163 14:33:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.163 14:33:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.163 14:33:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.163 14:33:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:41.163 14:33:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:41.163 14:33:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:41.163 14:33:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:41.163 14:33:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:41.163 14:33:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:41.163 14:33:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.163 14:33:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.163 14:33:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:41.163 14:33:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:41.163 14:33:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:41.163 14:33:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:41.163 14:33:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:41.163 14:33:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.163 14:33:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:41.163 14:33:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:41.163 14:33:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:41.163 14:33:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:41.163 14:33:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:41.163 14:33:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:41.163 Cannot find device "nvmf_tgt_br" 00:14:41.163 14:33:49 -- nvmf/common.sh@155 -- # true 00:14:41.163 14:33:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.163 Cannot find device "nvmf_tgt_br2" 00:14:41.163 14:33:49 -- nvmf/common.sh@156 -- # true 00:14:41.163 14:33:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:41.163 14:33:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:41.163 Cannot find device "nvmf_tgt_br" 00:14:41.163 14:33:49 -- nvmf/common.sh@158 -- # true 00:14:41.163 14:33:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:41.163 Cannot find device "nvmf_tgt_br2" 00:14:41.163 14:33:49 -- nvmf/common.sh@159 -- # true 00:14:41.163 14:33:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:41.163 14:33:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:41.163 14:33:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.163 14:33:49 -- nvmf/common.sh@162 -- # true 00:14:41.163 14:33:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.163 14:33:49 -- nvmf/common.sh@163 -- # true 00:14:41.163 14:33:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:41.163 14:33:49 -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:14:41.163 14:33:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:41.163 14:33:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:41.163 14:33:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:41.163 14:33:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:41.426 14:33:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:41.426 14:33:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:41.426 14:33:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:41.426 14:33:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:41.426 14:33:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:41.426 14:33:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:41.426 14:33:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:41.426 14:33:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:41.426 14:33:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:41.426 14:33:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:41.426 14:33:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:41.426 14:33:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:41.426 14:33:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:41.426 14:33:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:41.426 14:33:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:41.426 14:33:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:41.426 14:33:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:41.426 14:33:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:41.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:14:41.426 00:14:41.426 --- 10.0.0.2 ping statistics --- 00:14:41.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.426 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:41.426 14:33:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:41.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:41.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:41.426 00:14:41.426 --- 10.0.0.3 ping statistics --- 00:14:41.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.426 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:41.426 14:33:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:41.426 00:14:41.426 --- 10.0.0.1 ping statistics --- 00:14:41.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.426 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:41.426 14:33:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.426 14:33:49 -- nvmf/common.sh@422 -- # return 0 00:14:41.426 14:33:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:41.426 14:33:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.426 14:33:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:41.426 14:33:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:41.426 14:33:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.426 14:33:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:41.426 14:33:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:41.426 14:33:49 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:41.426 14:33:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:41.426 14:33:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:41.426 14:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:41.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.426 14:33:49 -- nvmf/common.sh@470 -- # nvmfpid=65129 00:14:41.426 14:33:49 -- nvmf/common.sh@471 -- # waitforlisten 65129 00:14:41.426 14:33:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:41.426 14:33:49 -- common/autotest_common.sh@817 -- # '[' -z 65129 ']' 00:14:41.426 14:33:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.426 14:33:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:41.426 14:33:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.426 14:33:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:41.426 14:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:41.426 [2024-04-17 14:33:49.981275] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:41.426 [2024-04-17 14:33:49.981408] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.693 [2024-04-17 14:33:50.129805] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:41.693 [2024-04-17 14:33:50.215821] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.693 [2024-04-17 14:33:50.216044] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.693 [2024-04-17 14:33:50.216182] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.693 [2024-04-17 14:33:50.216291] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.693 [2024-04-17 14:33:50.216382] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
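Condensed from the nvmf_veth_init trace above, the test's network layout is a target-side network namespace joined to the initiator over veth pairs and a bridge. A stripped-down sketch using the interface names and addresses from the trace (the second target interface, address flushes, and error handling are left out):

# Sketch of the veth/bridge layout the test builds (names and IPs as traced above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                             # bridge ties both host-side veth ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP from the namespace
ping -c 1 10.0.0.2                                          # initiator -> target reachability check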
00:14:41.693 [2024-04-17 14:33:50.216532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.693 [2024-04-17 14:33:50.216637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.693 [2024-04-17 14:33:50.216644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.630 14:33:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:42.630 14:33:50 -- common/autotest_common.sh@850 -- # return 0 00:14:42.630 14:33:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:42.630 14:33:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:42.630 14:33:50 -- common/autotest_common.sh@10 -- # set +x 00:14:42.630 14:33:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.630 14:33:50 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:42.630 [2024-04-17 14:33:51.140975] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.630 14:33:51 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:43.195 14:33:51 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:43.195 14:33:51 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:43.454 14:33:52 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:43.454 14:33:52 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:44.021 14:33:52 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:44.021 14:33:52 -- target/nvmf_lvol.sh@29 -- # lvs=b1249ecb-c954-44d8-8d96-89c816664c2e 00:14:44.021 14:33:52 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1249ecb-c954-44d8-8d96-89c816664c2e lvol 20 00:14:44.588 14:33:52 -- target/nvmf_lvol.sh@32 -- # lvol=f89f15f3-fa42-49f9-b23f-270fab7f55ed 00:14:44.588 14:33:52 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:44.588 14:33:53 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f89f15f3-fa42-49f9-b23f-270fab7f55ed 00:14:44.848 14:33:53 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:45.106 [2024-04-17 14:33:53.610764] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.106 14:33:53 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.365 14:33:53 -- target/nvmf_lvol.sh@42 -- # perf_pid=65210 00:14:45.365 14:33:53 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:45.365 14:33:53 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:46.739 14:33:54 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f89f15f3-fa42-49f9-b23f-270fab7f55ed MY_SNAPSHOT 00:14:46.739 14:33:55 -- target/nvmf_lvol.sh@47 -- # snapshot=9bd4d942-5325-4468-8215-90792b836964 00:14:46.739 14:33:55 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f89f15f3-fa42-49f9-b23f-270fab7f55ed 30 00:14:46.997 14:33:55 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9bd4d942-5325-4468-8215-90792b836964 MY_CLONE 00:14:47.255 14:33:55 -- target/nvmf_lvol.sh@49 -- # clone=a7de47e0-be62-4be0-902b-c3436bf8431b 00:14:47.255 14:33:55 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a7de47e0-be62-4be0-902b-c3436bf8431b 00:14:47.822 14:33:56 -- target/nvmf_lvol.sh@53 -- # wait 65210 00:14:55.934 Initializing NVMe Controllers 00:14:55.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:55.934 Controller IO queue size 128, less than required. 00:14:55.934 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:55.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:55.934 Initialization complete. Launching workers. 00:14:55.934 ======================================================== 00:14:55.934 Latency(us) 00:14:55.934 Device Information : IOPS MiB/s Average min max 00:14:55.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9579.19 37.42 13364.02 2123.26 54561.00 00:14:55.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9700.39 37.89 13202.13 2133.80 56463.19 00:14:55.934 ======================================================== 00:14:55.934 Total : 19279.59 75.31 13282.57 2123.26 56463.19 00:14:55.934 00:14:55.934 14:34:04 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:55.934 14:34:04 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f89f15f3-fa42-49f9-b23f-270fab7f55ed 00:14:56.191 14:34:04 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b1249ecb-c954-44d8-8d96-89c816664c2e 00:14:56.449 14:34:04 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:56.449 14:34:04 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:56.449 14:34:04 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:56.449 14:34:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:56.449 14:34:04 -- nvmf/common.sh@117 -- # sync 00:14:56.449 14:34:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.449 14:34:04 -- nvmf/common.sh@120 -- # set +e 00:14:56.449 14:34:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.449 14:34:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.449 rmmod nvme_tcp 00:14:56.449 rmmod nvme_fabrics 00:14:56.449 rmmod nvme_keyring 00:14:56.449 14:34:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.449 14:34:04 -- nvmf/common.sh@124 -- # set -e 00:14:56.449 14:34:04 -- nvmf/common.sh@125 -- # return 0 00:14:56.449 14:34:04 -- nvmf/common.sh@478 -- # '[' -n 65129 ']' 00:14:56.449 14:34:04 -- nvmf/common.sh@479 -- # killprocess 65129 00:14:56.449 14:34:04 -- common/autotest_common.sh@936 -- # '[' -z 65129 ']' 00:14:56.449 14:34:04 -- common/autotest_common.sh@940 -- # kill -0 65129 00:14:56.449 14:34:04 -- common/autotest_common.sh@941 -- # uname 00:14:56.449 14:34:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.449 14:34:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
65129 00:14:56.449 killing process with pid 65129 00:14:56.449 14:34:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:56.449 14:34:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:56.449 14:34:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65129' 00:14:56.449 14:34:04 -- common/autotest_common.sh@955 -- # kill 65129 00:14:56.449 14:34:04 -- common/autotest_common.sh@960 -- # wait 65129 00:14:56.707 14:34:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:56.707 14:34:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:56.707 14:34:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:56.707 14:34:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.707 14:34:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.707 14:34:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.707 14:34:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.707 14:34:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.707 14:34:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:56.707 ************************************ 00:14:56.707 END TEST nvmf_lvol 00:14:56.707 ************************************ 00:14:56.707 00:14:56.707 real 0m15.768s 00:14:56.707 user 1m5.015s 00:14:56.707 sys 0m4.902s 00:14:56.707 14:34:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:56.707 14:34:05 -- common/autotest_common.sh@10 -- # set +x 00:14:56.707 14:34:05 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:56.707 14:34:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:56.707 14:34:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.707 14:34:05 -- common/autotest_common.sh@10 -- # set +x 00:14:56.966 ************************************ 00:14:56.966 START TEST nvmf_lvs_grow 00:14:56.966 ************************************ 00:14:56.966 14:34:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:56.966 * Looking for test storage... 
00:14:56.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.966 14:34:05 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.966 14:34:05 -- nvmf/common.sh@7 -- # uname -s 00:14:56.966 14:34:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.966 14:34:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.966 14:34:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.966 14:34:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.966 14:34:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.966 14:34:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.966 14:34:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.966 14:34:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.966 14:34:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.966 14:34:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.966 14:34:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:14:56.966 14:34:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:14:56.966 14:34:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.966 14:34:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.966 14:34:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.966 14:34:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.966 14:34:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.966 14:34:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.966 14:34:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.966 14:34:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.966 14:34:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.966 14:34:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.966 14:34:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.966 14:34:05 -- paths/export.sh@5 -- # export PATH 00:14:56.966 14:34:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.966 14:34:05 -- nvmf/common.sh@47 -- # : 0 00:14:56.966 14:34:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.966 14:34:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.966 14:34:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.966 14:34:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.966 14:34:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.966 14:34:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.966 14:34:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.966 14:34:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.966 14:34:05 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.966 14:34:05 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.966 14:34:05 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:56.966 14:34:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:56.966 14:34:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.966 14:34:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:56.967 14:34:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:56.967 14:34:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:56.967 14:34:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.967 14:34:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.967 14:34:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.967 14:34:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:56.967 14:34:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:56.967 14:34:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:56.967 14:34:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:56.967 14:34:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:56.967 14:34:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:56.967 14:34:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.967 14:34:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.967 14:34:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.967 14:34:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:56.967 14:34:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.967 14:34:05 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.967 14:34:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.967 14:34:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.967 14:34:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.967 14:34:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.967 14:34:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.967 14:34:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.967 14:34:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:56.967 14:34:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:56.967 Cannot find device "nvmf_tgt_br" 00:14:56.967 14:34:05 -- nvmf/common.sh@155 -- # true 00:14:56.967 14:34:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.967 Cannot find device "nvmf_tgt_br2" 00:14:56.967 14:34:05 -- nvmf/common.sh@156 -- # true 00:14:56.967 14:34:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:56.967 14:34:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:56.967 Cannot find device "nvmf_tgt_br" 00:14:56.967 14:34:05 -- nvmf/common.sh@158 -- # true 00:14:56.967 14:34:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:56.967 Cannot find device "nvmf_tgt_br2" 00:14:56.967 14:34:05 -- nvmf/common.sh@159 -- # true 00:14:56.967 14:34:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:57.230 14:34:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:57.230 14:34:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.230 14:34:05 -- nvmf/common.sh@162 -- # true 00:14:57.230 14:34:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.230 14:34:05 -- nvmf/common.sh@163 -- # true 00:14:57.230 14:34:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.230 14:34:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.230 14:34:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.230 14:34:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.230 14:34:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.230 14:34:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.230 14:34:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.230 14:34:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:57.230 14:34:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:57.230 14:34:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:57.230 14:34:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:57.230 14:34:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:57.230 14:34:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:57.230 14:34:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:57.230 14:34:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
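Note: the nvmf_veth_init trace above builds a small veth topology: one end of each veth pair stays in the host namespace, the target-side ends are moved into the nvmf_tgt_ns_spdk namespace, and the 10.0.0.x addresses used by the tests are assigned there. A condensed sketch of the same steps (run as root; interface names and addresses are taken directly from the trace, cleanup and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    # initiator-side and target-side veth pairs
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace the SPDK target will run in
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

The bridge (nvmf_br), iptables rules and ping checks that tie the two namespaces together follow in the next part of the trace.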
00:14:57.230 14:34:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.230 14:34:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:57.230 14:34:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:57.230 14:34:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.230 14:34:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.230 14:34:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.230 14:34:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.230 14:34:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.230 14:34:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:57.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:14:57.230 00:14:57.230 --- 10.0.0.2 ping statistics --- 00:14:57.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.230 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:14:57.230 14:34:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:57.230 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:57.230 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:14:57.230 00:14:57.230 --- 10.0.0.3 ping statistics --- 00:14:57.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.230 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:57.230 14:34:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:14:57.230 00:14:57.230 --- 10.0.0.1 ping statistics --- 00:14:57.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.230 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:57.230 14:34:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.230 14:34:05 -- nvmf/common.sh@422 -- # return 0 00:14:57.230 14:34:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:57.230 14:34:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.230 14:34:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:57.230 14:34:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:57.230 14:34:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.230 14:34:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:57.230 14:34:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:57.492 14:34:05 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:57.492 14:34:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:57.492 14:34:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:57.492 14:34:05 -- common/autotest_common.sh@10 -- # set +x 00:14:57.492 14:34:05 -- nvmf/common.sh@470 -- # nvmfpid=65531 00:14:57.492 14:34:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:57.492 14:34:05 -- nvmf/common.sh@471 -- # waitforlisten 65531 00:14:57.492 14:34:05 -- common/autotest_common.sh@817 -- # '[' -z 65531 ']' 00:14:57.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
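Note: once the peer ends are enslaved to nvmf_br and reachability is confirmed with ping, the target binary is launched inside the namespace and the harness waits for its RPC socket before issuing any RPCs. A minimal sketch of that launch-and-wait step, using the same paths and flags as the trace; the retry loop below is illustrative (the harness's waitforlisten helper does the equivalent):

    SPDK=/home/vagrant/spdk_repo/spdk
    # run the NVMe-oF target in the test namespace (shm id 0, trace mask 0xFFFF, core mask 0x1)
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll the RPC socket until the app is ready to accept RPCs
    for _ in $(seq 1 50); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    # the tests then create the TCP transport with the options seen in the trace
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192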
00:14:57.492 14:34:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.492 14:34:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:57.492 14:34:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.492 14:34:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:57.492 14:34:05 -- common/autotest_common.sh@10 -- # set +x 00:14:57.492 [2024-04-17 14:34:05.897163] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:14:57.492 [2024-04-17 14:34:05.897440] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.492 [2024-04-17 14:34:06.036793] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.752 [2024-04-17 14:34:06.095487] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.752 [2024-04-17 14:34:06.095538] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.752 [2024-04-17 14:34:06.095555] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.752 [2024-04-17 14:34:06.095569] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.752 [2024-04-17 14:34:06.095581] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.752 [2024-04-17 14:34:06.095617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.322 14:34:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:58.322 14:34:06 -- common/autotest_common.sh@850 -- # return 0 00:14:58.322 14:34:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:58.322 14:34:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:58.322 14:34:06 -- common/autotest_common.sh@10 -- # set +x 00:14:58.322 14:34:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.322 14:34:06 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:58.635 [2024-04-17 14:34:07.183256] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.635 14:34:07 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:58.635 14:34:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:58.635 14:34:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.635 14:34:07 -- common/autotest_common.sh@10 -- # set +x 00:14:58.893 ************************************ 00:14:58.893 START TEST lvs_grow_clean 00:14:58.893 ************************************ 00:14:58.893 14:34:07 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:58.893 14:34:07 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:58.893 14:34:07 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:58.893 14:34:07 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:58.893 14:34:07 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:58.893 14:34:07 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:58.893 14:34:07 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:58.893 14:34:07 -- 
target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:58.893 14:34:07 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:58.893 14:34:07 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:59.152 14:34:07 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:59.152 14:34:07 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:59.410 14:34:07 -- target/nvmf_lvs_grow.sh@28 -- # lvs=156e96b3-2779-4e5d-96f1-793beae80cb1 00:14:59.410 14:34:07 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:59.410 14:34:07 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:14:59.669 14:34:08 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:59.669 14:34:08 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:59.669 14:34:08 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 156e96b3-2779-4e5d-96f1-793beae80cb1 lvol 150 00:14:59.927 14:34:08 -- target/nvmf_lvs_grow.sh@33 -- # lvol=557bfac4-b7ad-4b6e-8862-537e31214049 00:14:59.927 14:34:08 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:59.927 14:34:08 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:00.186 [2024-04-17 14:34:08.757912] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:00.186 [2024-04-17 14:34:08.758021] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:00.186 true 00:15:00.186 14:34:08 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:00.186 14:34:08 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:00.753 14:34:09 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:00.753 14:34:09 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:00.753 14:34:09 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 557bfac4-b7ad-4b6e-8862-537e31214049 00:15:01.320 14:34:09 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:01.320 [2024-04-17 14:34:09.894753] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.320 14:34:09 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
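Note: the lvs_grow_clean setup traced above boils down to: back a lvstore with a 200M AIO file, carve a 150M lvol out of it, grow the file to 400M and rescan so the extra blocks become visible, then export the lvol over NVMe/TCP. A condensed sketch of the same RPC sequence (rpc_py and the aio file path are as in the trace; $lvs and $lvol simply capture the UUIDs the RPCs print):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio"
    $rpc_py bdev_aio_create "$aio" aio_bdev 4096             # 4 KiB block size
    lvs=$($rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)     # 4 MiB clusters -> 49 data clusters
    lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB logical volume

    truncate -s 400M "$aio"                                  # grow the backing file...
    $rpc_py bdev_aio_rescan aio_bdev                         # ...and let the bdev layer pick up the new size

    # export the lvol to the initiator over NVMe/TCP
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The rescan only resizes the AIO bdev underneath (51200 to 102400 blocks in the trace); it is the later bdev_lvol_grow_lvstore call, issued while bdevperf is running, that turns the extra blocks into new data clusters (49 to 99).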
00:15:01.888 14:34:10 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65629 00:15:01.888 14:34:10 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:01.888 14:34:10 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.888 14:34:10 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65629 /var/tmp/bdevperf.sock 00:15:01.888 14:34:10 -- common/autotest_common.sh@817 -- # '[' -z 65629 ']' 00:15:01.888 14:34:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.888 14:34:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:01.888 14:34:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:01.888 14:34:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:01.888 14:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:01.888 [2024-04-17 14:34:10.262976] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:01.888 [2024-04-17 14:34:10.263344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65629 ] 00:15:01.888 [2024-04-17 14:34:10.401242] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.888 [2024-04-17 14:34:10.459188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.833 14:34:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.833 14:34:11 -- common/autotest_common.sh@850 -- # return 0 00:15:02.833 14:34:11 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:03.113 Nvme0n1 00:15:03.113 14:34:11 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:03.371 [ 00:15:03.371 { 00:15:03.371 "name": "Nvme0n1", 00:15:03.371 "aliases": [ 00:15:03.371 "557bfac4-b7ad-4b6e-8862-537e31214049" 00:15:03.371 ], 00:15:03.371 "product_name": "NVMe disk", 00:15:03.371 "block_size": 4096, 00:15:03.371 "num_blocks": 38912, 00:15:03.371 "uuid": "557bfac4-b7ad-4b6e-8862-537e31214049", 00:15:03.371 "assigned_rate_limits": { 00:15:03.371 "rw_ios_per_sec": 0, 00:15:03.371 "rw_mbytes_per_sec": 0, 00:15:03.371 "r_mbytes_per_sec": 0, 00:15:03.371 "w_mbytes_per_sec": 0 00:15:03.371 }, 00:15:03.371 "claimed": false, 00:15:03.371 "zoned": false, 00:15:03.371 "supported_io_types": { 00:15:03.371 "read": true, 00:15:03.371 "write": true, 00:15:03.371 "unmap": true, 00:15:03.371 "write_zeroes": true, 00:15:03.371 "flush": true, 00:15:03.371 "reset": true, 00:15:03.371 "compare": true, 00:15:03.371 "compare_and_write": true, 00:15:03.371 "abort": true, 00:15:03.371 "nvme_admin": true, 00:15:03.371 "nvme_io": true 00:15:03.371 }, 00:15:03.371 "memory_domains": [ 00:15:03.371 { 00:15:03.371 "dma_device_id": "system", 00:15:03.371 "dma_device_type": 1 00:15:03.371 } 00:15:03.371 ], 00:15:03.371 "driver_specific": { 00:15:03.371 "nvme": [ 00:15:03.371 { 00:15:03.371 "trid": { 00:15:03.371 "trtype": "TCP", 00:15:03.371 "adrfam": "IPv4", 00:15:03.371 "traddr": "10.0.0.2", 00:15:03.371 
"trsvcid": "4420", 00:15:03.371 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:03.371 }, 00:15:03.371 "ctrlr_data": { 00:15:03.371 "cntlid": 1, 00:15:03.371 "vendor_id": "0x8086", 00:15:03.371 "model_number": "SPDK bdev Controller", 00:15:03.371 "serial_number": "SPDK0", 00:15:03.371 "firmware_revision": "24.05", 00:15:03.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:03.371 "oacs": { 00:15:03.371 "security": 0, 00:15:03.371 "format": 0, 00:15:03.371 "firmware": 0, 00:15:03.371 "ns_manage": 0 00:15:03.371 }, 00:15:03.371 "multi_ctrlr": true, 00:15:03.371 "ana_reporting": false 00:15:03.372 }, 00:15:03.372 "vs": { 00:15:03.372 "nvme_version": "1.3" 00:15:03.372 }, 00:15:03.372 "ns_data": { 00:15:03.372 "id": 1, 00:15:03.372 "can_share": true 00:15:03.372 } 00:15:03.372 } 00:15:03.372 ], 00:15:03.372 "mp_policy": "active_passive" 00:15:03.372 } 00:15:03.372 } 00:15:03.372 ] 00:15:03.372 14:34:11 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65652 00:15:03.372 14:34:11 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:03.372 14:34:11 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:03.372 Running I/O for 10 seconds... 00:15:04.305 Latency(us) 00:15:04.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.305 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:15:04.305 =================================================================================================================== 00:15:04.305 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:15:04.305 00:15:05.240 14:34:13 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:05.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.499 Nvme0n1 : 2.00 7127.00 27.84 0.00 0.00 0.00 0.00 0.00 00:15:05.499 =================================================================================================================== 00:15:05.499 Total : 7127.00 27.84 0.00 0.00 0.00 0.00 0.00 00:15:05.499 00:15:05.757 true 00:15:05.757 14:34:14 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:05.757 14:34:14 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:06.016 14:34:14 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:06.016 14:34:14 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:06.016 14:34:14 -- target/nvmf_lvs_grow.sh@65 -- # wait 65652 00:15:06.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.309 Nvme0n1 : 3.00 6364.67 24.86 0.00 0.00 0.00 0.00 0.00 00:15:06.309 =================================================================================================================== 00:15:06.309 Total : 6364.67 24.86 0.00 0.00 0.00 0.00 0.00 00:15:06.309 00:15:07.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.685 Nvme0n1 : 4.00 6424.25 25.09 0.00 0.00 0.00 0.00 0.00 00:15:07.685 =================================================================================================================== 00:15:07.685 Total : 6424.25 25.09 0.00 0.00 0.00 0.00 0.00 00:15:07.685 00:15:08.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.622 Nvme0n1 : 5.00 
6511.00 25.43 0.00 0.00 0.00 0.00 0.00 00:15:08.622 =================================================================================================================== 00:15:08.622 Total : 6511.00 25.43 0.00 0.00 0.00 0.00 0.00 00:15:08.622 00:15:09.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.562 Nvme0n1 : 6.00 6568.83 25.66 0.00 0.00 0.00 0.00 0.00 00:15:09.562 =================================================================================================================== 00:15:09.562 Total : 6568.83 25.66 0.00 0.00 0.00 0.00 0.00 00:15:09.562 00:15:10.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.497 Nvme0n1 : 7.00 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:15:10.497 =================================================================================================================== 00:15:10.498 Total : 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:15:10.498 00:15:11.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.435 Nvme0n1 : 8.00 6577.62 25.69 0.00 0.00 0.00 0.00 0.00 00:15:11.435 =================================================================================================================== 00:15:11.435 Total : 6577.62 25.69 0.00 0.00 0.00 0.00 0.00 00:15:11.435 00:15:12.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.413 Nvme0n1 : 9.00 6495.89 25.37 0.00 0.00 0.00 0.00 0.00 00:15:12.413 =================================================================================================================== 00:15:12.413 Total : 6495.89 25.37 0.00 0.00 0.00 0.00 0.00 00:15:12.413 00:15:13.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.348 Nvme0n1 : 10.00 6417.80 25.07 0.00 0.00 0.00 0.00 0.00 00:15:13.348 =================================================================================================================== 00:15:13.348 Total : 6417.80 25.07 0.00 0.00 0.00 0.00 0.00 00:15:13.348 00:15:13.348 00:15:13.348 Latency(us) 00:15:13.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.348 Nvme0n1 : 10.02 6415.61 25.06 0.00 0.00 19945.44 4438.57 371767.85 00:15:13.348 =================================================================================================================== 00:15:13.348 Total : 6415.61 25.06 0.00 0.00 19945.44 4438.57 371767.85 00:15:13.348 0 00:15:13.348 14:34:21 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65629 00:15:13.348 14:34:21 -- common/autotest_common.sh@936 -- # '[' -z 65629 ']' 00:15:13.348 14:34:21 -- common/autotest_common.sh@940 -- # kill -0 65629 00:15:13.348 14:34:21 -- common/autotest_common.sh@941 -- # uname 00:15:13.348 14:34:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:13.348 14:34:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65629 00:15:13.607 14:34:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:13.607 14:34:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:13.607 14:34:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65629' 00:15:13.607 killing process with pid 65629 00:15:13.607 14:34:21 -- common/autotest_common.sh@955 -- # kill 65629 00:15:13.607 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.607 00:15:13.607 Latency(us) 00:15:13.607 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:15:13.607 =================================================================================================================== 00:15:13.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.607 14:34:21 -- common/autotest_common.sh@960 -- # wait 65629 00:15:13.607 14:34:22 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:14.174 14:34:22 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:14.174 14:34:22 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:14.433 14:34:22 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:14.433 14:34:22 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:14.433 14:34:22 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:14.692 [2024-04-17 14:34:23.042647] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:14.692 14:34:23 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:14.692 14:34:23 -- common/autotest_common.sh@638 -- # local es=0 00:15:14.692 14:34:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:14.692 14:34:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.692 14:34:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:14.692 14:34:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.692 14:34:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:14.692 14:34:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.692 14:34:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:14.692 14:34:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.692 14:34:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:14.692 14:34:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:14.950 request: 00:15:14.950 { 00:15:14.950 "uuid": "156e96b3-2779-4e5d-96f1-793beae80cb1", 00:15:14.950 "method": "bdev_lvol_get_lvstores", 00:15:14.950 "req_id": 1 00:15:14.950 } 00:15:14.950 Got JSON-RPC error response 00:15:14.950 response: 00:15:14.950 { 00:15:14.950 "code": -19, 00:15:14.950 "message": "No such device" 00:15:14.950 } 00:15:14.950 14:34:23 -- common/autotest_common.sh@641 -- # es=1 00:15:14.950 14:34:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:14.950 14:34:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:14.950 14:34:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:14.951 14:34:23 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:15.209 aio_bdev 00:15:15.209 14:34:23 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 557bfac4-b7ad-4b6e-8862-537e31214049 00:15:15.209 14:34:23 -- common/autotest_common.sh@885 -- # local 
bdev_name=557bfac4-b7ad-4b6e-8862-537e31214049 00:15:15.209 14:34:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:15.209 14:34:23 -- common/autotest_common.sh@887 -- # local i 00:15:15.209 14:34:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:15.209 14:34:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:15.209 14:34:23 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:15.467 14:34:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 557bfac4-b7ad-4b6e-8862-537e31214049 -t 2000 00:15:15.725 [ 00:15:15.725 { 00:15:15.725 "name": "557bfac4-b7ad-4b6e-8862-537e31214049", 00:15:15.725 "aliases": [ 00:15:15.725 "lvs/lvol" 00:15:15.725 ], 00:15:15.725 "product_name": "Logical Volume", 00:15:15.725 "block_size": 4096, 00:15:15.725 "num_blocks": 38912, 00:15:15.725 "uuid": "557bfac4-b7ad-4b6e-8862-537e31214049", 00:15:15.725 "assigned_rate_limits": { 00:15:15.725 "rw_ios_per_sec": 0, 00:15:15.725 "rw_mbytes_per_sec": 0, 00:15:15.725 "r_mbytes_per_sec": 0, 00:15:15.725 "w_mbytes_per_sec": 0 00:15:15.725 }, 00:15:15.725 "claimed": false, 00:15:15.725 "zoned": false, 00:15:15.725 "supported_io_types": { 00:15:15.725 "read": true, 00:15:15.725 "write": true, 00:15:15.725 "unmap": true, 00:15:15.725 "write_zeroes": true, 00:15:15.725 "flush": false, 00:15:15.725 "reset": true, 00:15:15.725 "compare": false, 00:15:15.725 "compare_and_write": false, 00:15:15.725 "abort": false, 00:15:15.725 "nvme_admin": false, 00:15:15.725 "nvme_io": false 00:15:15.725 }, 00:15:15.725 "driver_specific": { 00:15:15.725 "lvol": { 00:15:15.725 "lvol_store_uuid": "156e96b3-2779-4e5d-96f1-793beae80cb1", 00:15:15.725 "base_bdev": "aio_bdev", 00:15:15.725 "thin_provision": false, 00:15:15.725 "snapshot": false, 00:15:15.725 "clone": false, 00:15:15.725 "esnap_clone": false 00:15:15.725 } 00:15:15.725 } 00:15:15.725 } 00:15:15.725 ] 00:15:15.725 14:34:24 -- common/autotest_common.sh@893 -- # return 0 00:15:15.725 14:34:24 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:15.725 14:34:24 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:16.291 14:34:24 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:16.291 14:34:24 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:16.291 14:34:24 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:16.549 14:34:24 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:16.549 14:34:24 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 557bfac4-b7ad-4b6e-8862-537e31214049 00:15:16.807 14:34:25 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 156e96b3-2779-4e5d-96f1-793beae80cb1 00:15:17.065 14:34:25 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:17.324 14:34:25 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:17.891 ************************************ 00:15:17.891 END TEST lvs_grow_clean 00:15:17.891 ************************************ 00:15:17.891 00:15:17.891 real 0m19.000s 00:15:17.891 user 0m17.944s 00:15:17.891 sys 0m2.505s 00:15:17.891 14:34:26 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:15:17.891 14:34:26 -- common/autotest_common.sh@10 -- # set +x 00:15:17.891 14:34:26 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:17.892 14:34:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:17.892 14:34:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:17.892 14:34:26 -- common/autotest_common.sh@10 -- # set +x 00:15:17.892 ************************************ 00:15:17.892 START TEST lvs_grow_dirty 00:15:17.892 ************************************ 00:15:17.892 14:34:26 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:15:17.892 14:34:26 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:17.892 14:34:26 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:17.892 14:34:26 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:17.892 14:34:26 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:17.892 14:34:26 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:17.892 14:34:26 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:17.892 14:34:26 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:17.892 14:34:26 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:17.892 14:34:26 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:18.150 14:34:26 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:18.150 14:34:26 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:18.718 14:34:27 -- target/nvmf_lvs_grow.sh@28 -- # lvs=367df87d-1fd4-42ac-be7a-3a516d837190 00:15:18.718 14:34:27 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:18.718 14:34:27 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:18.977 14:34:27 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:18.977 14:34:27 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:18.977 14:34:27 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 367df87d-1fd4-42ac-be7a-3a516d837190 lvol 150 00:15:19.235 14:34:27 -- target/nvmf_lvs_grow.sh@33 -- # lvol=707b1e9a-af07-450b-96eb-6319805aa8cc 00:15:19.235 14:34:27 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:19.235 14:34:27 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:19.494 [2024-04-17 14:34:27.937357] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:19.494 [2024-04-17 14:34:27.937460] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:19.494 true 00:15:19.494 14:34:27 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:19.494 14:34:27 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:19.753 14:34:28 -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:19.753 14:34:28 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:20.320 14:34:28 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 707b1e9a-af07-450b-96eb-6319805aa8cc 00:15:20.320 14:34:28 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:20.886 14:34:29 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:21.144 14:34:29 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65914 00:15:21.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.144 14:34:29 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:21.144 14:34:29 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:21.144 14:34:29 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65914 /var/tmp/bdevperf.sock 00:15:21.144 14:34:29 -- common/autotest_common.sh@817 -- # '[' -z 65914 ']' 00:15:21.144 14:34:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.144 14:34:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:21.144 14:34:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.144 14:34:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:21.144 14:34:29 -- common/autotest_common.sh@10 -- # set +x 00:15:21.144 [2024-04-17 14:34:29.550756] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:15:21.144 [2024-04-17 14:34:29.550862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65914 ] 00:15:21.144 [2024-04-17 14:34:29.695363] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.437 [2024-04-17 14:34:29.769524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.437 14:34:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:21.437 14:34:29 -- common/autotest_common.sh@850 -- # return 0 00:15:21.437 14:34:29 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:21.714 Nvme0n1 00:15:21.714 14:34:30 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:21.972 [ 00:15:21.972 { 00:15:21.972 "name": "Nvme0n1", 00:15:21.972 "aliases": [ 00:15:21.972 "707b1e9a-af07-450b-96eb-6319805aa8cc" 00:15:21.972 ], 00:15:21.972 "product_name": "NVMe disk", 00:15:21.972 "block_size": 4096, 00:15:21.972 "num_blocks": 38912, 00:15:21.972 "uuid": "707b1e9a-af07-450b-96eb-6319805aa8cc", 00:15:21.972 "assigned_rate_limits": { 00:15:21.972 "rw_ios_per_sec": 0, 00:15:21.972 "rw_mbytes_per_sec": 0, 00:15:21.972 "r_mbytes_per_sec": 0, 00:15:21.972 "w_mbytes_per_sec": 0 00:15:21.972 }, 00:15:21.972 "claimed": false, 00:15:21.972 "zoned": false, 00:15:21.972 "supported_io_types": { 00:15:21.972 "read": true, 00:15:21.972 "write": true, 00:15:21.972 "unmap": true, 00:15:21.972 "write_zeroes": true, 00:15:21.972 "flush": true, 00:15:21.972 "reset": true, 00:15:21.972 "compare": true, 00:15:21.972 "compare_and_write": true, 00:15:21.972 "abort": true, 00:15:21.972 "nvme_admin": true, 00:15:21.972 "nvme_io": true 00:15:21.972 }, 00:15:21.972 "memory_domains": [ 00:15:21.972 { 00:15:21.972 "dma_device_id": "system", 00:15:21.972 "dma_device_type": 1 00:15:21.972 } 00:15:21.972 ], 00:15:21.972 "driver_specific": { 00:15:21.972 "nvme": [ 00:15:21.972 { 00:15:21.972 "trid": { 00:15:21.972 "trtype": "TCP", 00:15:21.972 "adrfam": "IPv4", 00:15:21.972 "traddr": "10.0.0.2", 00:15:21.972 "trsvcid": "4420", 00:15:21.972 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:21.972 }, 00:15:21.972 "ctrlr_data": { 00:15:21.972 "cntlid": 1, 00:15:21.972 "vendor_id": "0x8086", 00:15:21.972 "model_number": "SPDK bdev Controller", 00:15:21.972 "serial_number": "SPDK0", 00:15:21.972 "firmware_revision": "24.05", 00:15:21.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:21.972 "oacs": { 00:15:21.972 "security": 0, 00:15:21.972 "format": 0, 00:15:21.972 "firmware": 0, 00:15:21.972 "ns_manage": 0 00:15:21.972 }, 00:15:21.972 "multi_ctrlr": true, 00:15:21.972 "ana_reporting": false 00:15:21.972 }, 00:15:21.972 "vs": { 00:15:21.972 "nvme_version": "1.3" 00:15:21.972 }, 00:15:21.972 "ns_data": { 00:15:21.972 "id": 1, 00:15:21.972 "can_share": true 00:15:21.972 } 00:15:21.972 } 00:15:21.972 ], 00:15:21.972 "mp_policy": "active_passive" 00:15:21.972 } 00:15:21.972 } 00:15:21.972 ] 00:15:21.972 14:34:30 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65934 00:15:21.972 14:34:30 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:21.972 14:34:30 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:15:22.231 Running I/O for 10 seconds... 00:15:23.167 Latency(us) 00:15:23.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.167 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:15:23.167 =================================================================================================================== 00:15:23.167 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:15:23.167 00:15:24.102 14:34:32 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:24.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.102 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:15:24.102 =================================================================================================================== 00:15:24.102 Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:15:24.102 00:15:24.361 true 00:15:24.361 14:34:32 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:24.361 14:34:32 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:24.619 14:34:33 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:24.619 14:34:33 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:24.619 14:34:33 -- target/nvmf_lvs_grow.sh@65 -- # wait 65934 00:15:25.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.185 Nvme0n1 : 3.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:15:25.185 =================================================================================================================== 00:15:25.185 Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:15:25.185 00:15:26.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.121 Nvme0n1 : 4.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:15:26.121 =================================================================================================================== 00:15:26.121 Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:15:26.121 00:15:27.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.503 Nvme0n1 : 5.00 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:15:27.503 =================================================================================================================== 00:15:27.503 Total : 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:15:27.503 00:15:28.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.438 Nvme0n1 : 6.00 6547.67 25.58 0.00 0.00 0.00 0.00 0.00 00:15:28.438 =================================================================================================================== 00:15:28.438 Total : 6547.67 25.58 0.00 0.00 0.00 0.00 0.00 00:15:28.438 00:15:29.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:29.373 Nvme0n1 : 7.00 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:15:29.373 =================================================================================================================== 00:15:29.373 Total : 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:15:29.373 00:15:30.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:30.341 Nvme0n1 : 8.00 6641.12 25.94 0.00 0.00 0.00 0.00 0.00 00:15:30.341 
=================================================================================================================== 00:15:30.341 Total : 6641.12 25.94 0.00 0.00 0.00 0.00 0.00 00:15:30.341 00:15:31.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:31.303 Nvme0n1 : 9.00 6679.33 26.09 0.00 0.00 0.00 0.00 0.00 00:15:31.303 =================================================================================================================== 00:15:31.303 Total : 6679.33 26.09 0.00 0.00 0.00 0.00 0.00 00:15:31.303 00:15:32.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.239 Nvme0n1 : 10.00 6722.60 26.26 0.00 0.00 0.00 0.00 0.00 00:15:32.239 =================================================================================================================== 00:15:32.239 Total : 6722.60 26.26 0.00 0.00 0.00 0.00 0.00 00:15:32.239 00:15:32.239 00:15:32.239 Latency(us) 00:15:32.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.239 Nvme0n1 : 10.02 6722.74 26.26 0.00 0.00 19032.49 13345.51 276442.76 00:15:32.239 =================================================================================================================== 00:15:32.239 Total : 6722.74 26.26 0.00 0.00 19032.49 13345.51 276442.76 00:15:32.239 0 00:15:32.239 14:34:40 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65914 00:15:32.239 14:34:40 -- common/autotest_common.sh@936 -- # '[' -z 65914 ']' 00:15:32.239 14:34:40 -- common/autotest_common.sh@940 -- # kill -0 65914 00:15:32.239 14:34:40 -- common/autotest_common.sh@941 -- # uname 00:15:32.239 14:34:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.239 14:34:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65914 00:15:32.239 killing process with pid 65914 00:15:32.239 Received shutdown signal, test time was about 10.000000 seconds 00:15:32.239 00:15:32.239 Latency(us) 00:15:32.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.239 =================================================================================================================== 00:15:32.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.239 14:34:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:32.239 14:34:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:32.239 14:34:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65914' 00:15:32.239 14:34:40 -- common/autotest_common.sh@955 -- # kill 65914 00:15:32.239 14:34:40 -- common/autotest_common.sh@960 -- # wait 65914 00:15:32.498 14:34:40 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:32.756 14:34:41 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:32.756 14:34:41 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:33.015 14:34:41 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:33.015 14:34:41 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:33.015 14:34:41 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 65531 00:15:33.015 14:34:41 -- target/nvmf_lvs_grow.sh@74 -- # wait 65531 00:15:33.015 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 65531 Killed "${NVMF_APP[@]}" "$@" 00:15:33.015 
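Note: the dirty variant differs from the clean one only at the end of the run: after the grow, the script records the lvstore's free cluster count and then SIGKILLs the target, so the blobstore is never cleanly unloaded. A minimal sketch of that step, assuming $rpc_py, $lvs and the target pid from the earlier steps:

    free_clusters=$($rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    # leave the blobstore dirty on purpose: kill -9 skips the clean shutdown path
    kill -9 "$nvmfpid"
    wait "$nvmfpid" || true      # reaps the child; the shell reports "Killed", as in the trace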
14:34:41 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:33.015 14:34:41 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:33.015 14:34:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:33.015 14:34:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:33.015 14:34:41 -- common/autotest_common.sh@10 -- # set +x 00:15:33.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.015 14:34:41 -- nvmf/common.sh@470 -- # nvmfpid=66060 00:15:33.015 14:34:41 -- nvmf/common.sh@471 -- # waitforlisten 66060 00:15:33.015 14:34:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:33.015 14:34:41 -- common/autotest_common.sh@817 -- # '[' -z 66060 ']' 00:15:33.015 14:34:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.015 14:34:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:33.015 14:34:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.015 14:34:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:33.015 14:34:41 -- common/autotest_common.sh@10 -- # set +x 00:15:33.274 [2024-04-17 14:34:41.646147] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:33.274 [2024-04-17 14:34:41.646267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.274 [2024-04-17 14:34:41.797605] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.274 [2024-04-17 14:34:41.855334] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.274 [2024-04-17 14:34:41.855394] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.274 [2024-04-17 14:34:41.855406] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.274 [2024-04-17 14:34:41.855414] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.274 [2024-04-17 14:34:41.855422] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
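Note: with a fresh target process up, the test only has to re-register the AIO file: the lvol examine path finds the old metadata, the blobstore sees it was not cleanly shut down and replays recovery (the "Performing recovery on blobstore" notices that follow), and the lvol bdev reappears under its old UUID. A condensed sketch of that re-attach, assuming $rpc_py, $aio and $lvol were recorded before the kill:

    # re-create the AIO bdev on the same file; examine recovers the dirty lvstore
    $rpc_py bdev_aio_create "$aio" aio_bdev 4096
    $rpc_py bdev_wait_for_examine
    # the lvol bdev should come back; 2000 ms timeout as used elsewhere in the trace
    $rpc_py bdev_get_bdevs -b "$lvol" -t 2000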
00:15:33.274 [2024-04-17 14:34:41.855454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.211 14:34:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:34.211 14:34:42 -- common/autotest_common.sh@850 -- # return 0 00:15:34.211 14:34:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:34.211 14:34:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:34.211 14:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:34.211 14:34:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.211 14:34:42 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:34.500 [2024-04-17 14:34:43.000088] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:34.500 [2024-04-17 14:34:43.000352] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:34.500 [2024-04-17 14:34:43.000521] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:34.500 14:34:43 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:34.500 14:34:43 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 707b1e9a-af07-450b-96eb-6319805aa8cc 00:15:34.500 14:34:43 -- common/autotest_common.sh@885 -- # local bdev_name=707b1e9a-af07-450b-96eb-6319805aa8cc 00:15:34.500 14:34:43 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:34.500 14:34:43 -- common/autotest_common.sh@887 -- # local i 00:15:34.500 14:34:43 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:34.500 14:34:43 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:34.500 14:34:43 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:34.794 14:34:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 707b1e9a-af07-450b-96eb-6319805aa8cc -t 2000 00:15:35.053 [ 00:15:35.053 { 00:15:35.053 "name": "707b1e9a-af07-450b-96eb-6319805aa8cc", 00:15:35.053 "aliases": [ 00:15:35.053 "lvs/lvol" 00:15:35.053 ], 00:15:35.053 "product_name": "Logical Volume", 00:15:35.053 "block_size": 4096, 00:15:35.053 "num_blocks": 38912, 00:15:35.053 "uuid": "707b1e9a-af07-450b-96eb-6319805aa8cc", 00:15:35.053 "assigned_rate_limits": { 00:15:35.053 "rw_ios_per_sec": 0, 00:15:35.053 "rw_mbytes_per_sec": 0, 00:15:35.053 "r_mbytes_per_sec": 0, 00:15:35.053 "w_mbytes_per_sec": 0 00:15:35.053 }, 00:15:35.053 "claimed": false, 00:15:35.053 "zoned": false, 00:15:35.053 "supported_io_types": { 00:15:35.053 "read": true, 00:15:35.053 "write": true, 00:15:35.053 "unmap": true, 00:15:35.053 "write_zeroes": true, 00:15:35.053 "flush": false, 00:15:35.053 "reset": true, 00:15:35.053 "compare": false, 00:15:35.053 "compare_and_write": false, 00:15:35.053 "abort": false, 00:15:35.053 "nvme_admin": false, 00:15:35.053 "nvme_io": false 00:15:35.053 }, 00:15:35.053 "driver_specific": { 00:15:35.053 "lvol": { 00:15:35.053 "lvol_store_uuid": "367df87d-1fd4-42ac-be7a-3a516d837190", 00:15:35.053 "base_bdev": "aio_bdev", 00:15:35.053 "thin_provision": false, 00:15:35.053 "snapshot": false, 00:15:35.053 "clone": false, 00:15:35.053 "esnap_clone": false 00:15:35.053 } 00:15:35.053 } 00:15:35.053 } 00:15:35.054 ] 00:15:35.054 14:34:43 -- common/autotest_common.sh@893 -- # return 0 00:15:35.054 14:34:43 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:35.054 14:34:43 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:35.312 14:34:43 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:35.312 14:34:43 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:35.312 14:34:43 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:35.572 14:34:44 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:35.572 14:34:44 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:35.831 [2024-04-17 14:34:44.413706] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:36.090 14:34:44 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:36.090 14:34:44 -- common/autotest_common.sh@638 -- # local es=0 00:15:36.090 14:34:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:36.090 14:34:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.090 14:34:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:36.090 14:34:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.090 14:34:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:36.090 14:34:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.090 14:34:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:36.090 14:34:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.090 14:34:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:36.090 14:34:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:36.349 request: 00:15:36.349 { 00:15:36.349 "uuid": "367df87d-1fd4-42ac-be7a-3a516d837190", 00:15:36.349 "method": "bdev_lvol_get_lvstores", 00:15:36.349 "req_id": 1 00:15:36.349 } 00:15:36.349 Got JSON-RPC error response 00:15:36.349 response: 00:15:36.349 { 00:15:36.349 "code": -19, 00:15:36.349 "message": "No such device" 00:15:36.349 } 00:15:36.349 14:34:44 -- common/autotest_common.sh@641 -- # es=1 00:15:36.349 14:34:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:36.349 14:34:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:36.349 14:34:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:36.349 14:34:44 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:36.607 aio_bdev 00:15:36.607 14:34:45 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 707b1e9a-af07-450b-96eb-6319805aa8cc 00:15:36.607 14:34:45 -- common/autotest_common.sh@885 -- # local bdev_name=707b1e9a-af07-450b-96eb-6319805aa8cc 00:15:36.607 14:34:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:36.607 14:34:45 -- common/autotest_common.sh@887 -- # local i 00:15:36.607 14:34:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:36.607 14:34:45 -- common/autotest_common.sh@888 -- # 
bdev_timeout=2000 00:15:36.607 14:34:45 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:36.865 14:34:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 707b1e9a-af07-450b-96eb-6319805aa8cc -t 2000 00:15:37.123 [ 00:15:37.123 { 00:15:37.123 "name": "707b1e9a-af07-450b-96eb-6319805aa8cc", 00:15:37.123 "aliases": [ 00:15:37.123 "lvs/lvol" 00:15:37.123 ], 00:15:37.123 "product_name": "Logical Volume", 00:15:37.123 "block_size": 4096, 00:15:37.123 "num_blocks": 38912, 00:15:37.123 "uuid": "707b1e9a-af07-450b-96eb-6319805aa8cc", 00:15:37.123 "assigned_rate_limits": { 00:15:37.123 "rw_ios_per_sec": 0, 00:15:37.123 "rw_mbytes_per_sec": 0, 00:15:37.123 "r_mbytes_per_sec": 0, 00:15:37.123 "w_mbytes_per_sec": 0 00:15:37.123 }, 00:15:37.123 "claimed": false, 00:15:37.123 "zoned": false, 00:15:37.123 "supported_io_types": { 00:15:37.123 "read": true, 00:15:37.123 "write": true, 00:15:37.123 "unmap": true, 00:15:37.123 "write_zeroes": true, 00:15:37.123 "flush": false, 00:15:37.123 "reset": true, 00:15:37.123 "compare": false, 00:15:37.123 "compare_and_write": false, 00:15:37.123 "abort": false, 00:15:37.123 "nvme_admin": false, 00:15:37.123 "nvme_io": false 00:15:37.123 }, 00:15:37.123 "driver_specific": { 00:15:37.123 "lvol": { 00:15:37.123 "lvol_store_uuid": "367df87d-1fd4-42ac-be7a-3a516d837190", 00:15:37.123 "base_bdev": "aio_bdev", 00:15:37.123 "thin_provision": false, 00:15:37.123 "snapshot": false, 00:15:37.123 "clone": false, 00:15:37.123 "esnap_clone": false 00:15:37.123 } 00:15:37.123 } 00:15:37.123 } 00:15:37.123 ] 00:15:37.123 14:34:45 -- common/autotest_common.sh@893 -- # return 0 00:15:37.123 14:34:45 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:37.123 14:34:45 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:37.382 14:34:45 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:37.382 14:34:45 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:37.382 14:34:45 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:37.640 14:34:46 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:37.640 14:34:46 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 707b1e9a-af07-450b-96eb-6319805aa8cc 00:15:37.899 14:34:46 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 367df87d-1fd4-42ac-be7a-3a516d837190 00:15:38.158 14:34:46 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:38.417 14:34:46 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:38.675 ************************************ 00:15:38.675 END TEST lvs_grow_dirty 00:15:38.675 ************************************ 00:15:38.675 00:15:38.675 real 0m20.838s 00:15:38.675 user 0m44.496s 00:15:38.675 sys 0m7.882s 00:15:38.675 14:34:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:38.675 14:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:38.943 14:34:47 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:38.943 14:34:47 -- common/autotest_common.sh@794 -- # type=--id 00:15:38.943 14:34:47 -- common/autotest_common.sh@795 -- # id=0 00:15:38.943 
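The dirty-grow test above finishes by tearing the stack down in dependency order: the lvol first, then its lvol store, then the backing AIO bdev, and finally the aio_bdev file itself. A condensed sketch of that teardown, using the same rpc.py calls and UUIDs seen in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_lvol_delete 707b1e9a-af07-450b-96eb-6319805aa8cc             # the lvol (lvs/lvol)
  $rpc bdev_lvol_delete_lvstore -u 367df87d-1fd4-42ac-be7a-3a516d837190  # its lvol store
  $rpc bdev_aio_delete aio_bdev                                          # the AIO base bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev           # backing file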
14:34:47 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:38.943 14:34:47 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:38.943 14:34:47 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:38.943 14:34:47 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:38.943 14:34:47 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:38.943 14:34:47 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:38.943 nvmf_trace.0 00:15:38.943 14:34:47 -- common/autotest_common.sh@809 -- # return 0 00:15:38.943 14:34:47 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:38.943 14:34:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:38.943 14:34:47 -- nvmf/common.sh@117 -- # sync 00:15:38.943 14:34:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.943 14:34:47 -- nvmf/common.sh@120 -- # set +e 00:15:38.943 14:34:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.943 14:34:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.943 rmmod nvme_tcp 00:15:38.943 rmmod nvme_fabrics 00:15:38.943 rmmod nvme_keyring 00:15:38.943 14:34:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.943 14:34:47 -- nvmf/common.sh@124 -- # set -e 00:15:38.943 14:34:47 -- nvmf/common.sh@125 -- # return 0 00:15:38.943 14:34:47 -- nvmf/common.sh@478 -- # '[' -n 66060 ']' 00:15:38.943 14:34:47 -- nvmf/common.sh@479 -- # killprocess 66060 00:15:38.943 14:34:47 -- common/autotest_common.sh@936 -- # '[' -z 66060 ']' 00:15:38.943 14:34:47 -- common/autotest_common.sh@940 -- # kill -0 66060 00:15:38.943 14:34:47 -- common/autotest_common.sh@941 -- # uname 00:15:38.943 14:34:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.943 14:34:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66060 00:15:38.943 killing process with pid 66060 00:15:38.943 14:34:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:38.943 14:34:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:38.943 14:34:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66060' 00:15:38.943 14:34:47 -- common/autotest_common.sh@955 -- # kill 66060 00:15:38.943 14:34:47 -- common/autotest_common.sh@960 -- # wait 66060 00:15:39.229 14:34:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:39.230 14:34:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:39.230 14:34:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:39.230 14:34:47 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.230 14:34:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.230 14:34:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.230 14:34:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.230 14:34:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.230 14:34:47 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:39.230 00:15:39.230 real 0m42.368s 00:15:39.230 user 1m9.214s 00:15:39.230 sys 0m11.119s 00:15:39.230 14:34:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:39.230 14:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:39.230 ************************************ 00:15:39.230 END TEST nvmf_lvs_grow 00:15:39.230 ************************************ 00:15:39.230 14:34:47 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:39.230 14:34:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.230 14:34:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.230 14:34:47 -- common/autotest_common.sh@10 -- # set +x 00:15:39.489 ************************************ 00:15:39.489 START TEST nvmf_bdev_io_wait 00:15:39.489 ************************************ 00:15:39.489 14:34:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:39.489 * Looking for test storage... 00:15:39.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:39.489 14:34:47 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.489 14:34:47 -- nvmf/common.sh@7 -- # uname -s 00:15:39.489 14:34:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.489 14:34:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.489 14:34:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.489 14:34:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.489 14:34:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.489 14:34:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.489 14:34:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.489 14:34:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.489 14:34:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.489 14:34:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.489 14:34:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:15:39.489 14:34:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:15:39.489 14:34:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.489 14:34:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.489 14:34:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.489 14:34:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.489 14:34:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.489 14:34:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.489 14:34:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.489 14:34:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.489 14:34:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.489 14:34:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.489 14:34:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.489 14:34:47 -- paths/export.sh@5 -- # export PATH 00:15:39.489 14:34:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.489 14:34:47 -- nvmf/common.sh@47 -- # : 0 00:15:39.489 14:34:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.489 14:34:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.489 14:34:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.489 14:34:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.489 14:34:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.489 14:34:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.489 14:34:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.489 14:34:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.489 14:34:47 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.489 14:34:47 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.489 14:34:47 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:39.489 14:34:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:39.489 14:34:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.489 14:34:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:39.489 14:34:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:39.489 14:34:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:39.489 14:34:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.489 14:34:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.489 14:34:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.489 14:34:47 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:39.489 14:34:47 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:39.489 14:34:47 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:39.489 14:34:47 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:39.489 14:34:47 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
00:15:39.489 14:34:47 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:39.489 14:34:47 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.489 14:34:47 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.489 14:34:47 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.489 14:34:47 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:39.489 14:34:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.489 14:34:47 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.489 14:34:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.489 14:34:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.489 14:34:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.489 14:34:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.489 14:34:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.489 14:34:47 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.489 14:34:47 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:39.489 14:34:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:39.489 Cannot find device "nvmf_tgt_br" 00:15:39.489 14:34:47 -- nvmf/common.sh@155 -- # true 00:15:39.489 14:34:47 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.489 Cannot find device "nvmf_tgt_br2" 00:15:39.489 14:34:47 -- nvmf/common.sh@156 -- # true 00:15:39.489 14:34:47 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:39.489 14:34:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:39.489 Cannot find device "nvmf_tgt_br" 00:15:39.489 14:34:48 -- nvmf/common.sh@158 -- # true 00:15:39.489 14:34:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:39.489 Cannot find device "nvmf_tgt_br2" 00:15:39.489 14:34:48 -- nvmf/common.sh@159 -- # true 00:15:39.489 14:34:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:39.489 14:34:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:39.489 14:34:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.489 14:34:48 -- nvmf/common.sh@162 -- # true 00:15:39.489 14:34:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.489 14:34:48 -- nvmf/common.sh@163 -- # true 00:15:39.489 14:34:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.489 14:34:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.749 14:34:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.749 14:34:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.749 14:34:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.749 14:34:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.749 14:34:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.749 14:34:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.749 14:34:48 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.749 
14:34:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:39.749 14:34:48 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:39.749 14:34:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:39.749 14:34:48 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:39.749 14:34:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.749 14:34:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.749 14:34:48 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.749 14:34:48 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:39.749 14:34:48 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:39.749 14:34:48 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.749 14:34:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.749 14:34:48 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.749 14:34:48 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.749 14:34:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.749 14:34:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:39.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:15:39.749 00:15:39.749 --- 10.0.0.2 ping statistics --- 00:15:39.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.749 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:39.749 14:34:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:39.749 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.749 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:39.749 00:15:39.749 --- 10.0.0.3 ping statistics --- 00:15:39.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.749 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:39.749 14:34:48 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:39.749 00:15:39.749 --- 10.0.0.1 ping statistics --- 00:15:39.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.749 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:39.749 14:34:48 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.749 14:34:48 -- nvmf/common.sh@422 -- # return 0 00:15:39.749 14:34:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:39.749 14:34:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.749 14:34:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:39.749 14:34:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:39.749 14:34:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.749 14:34:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:39.749 14:34:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:39.749 14:34:48 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:39.749 14:34:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:39.749 14:34:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:39.749 14:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:39.749 14:34:48 -- nvmf/common.sh@470 -- # nvmfpid=66387 00:15:39.749 14:34:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:39.749 14:34:48 -- nvmf/common.sh@471 -- # waitforlisten 66387 00:15:39.749 14:34:48 -- common/autotest_common.sh@817 -- # '[' -z 66387 ']' 00:15:39.749 14:34:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.749 14:34:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:39.749 14:34:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.749 14:34:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:39.749 14:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.008 [2024-04-17 14:34:48.359838] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:40.008 [2024-04-17 14:34:48.359941] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.008 [2024-04-17 14:34:48.501543] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.008 [2024-04-17 14:34:48.571630] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.008 [2024-04-17 14:34:48.571691] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.008 [2024-04-17 14:34:48.571705] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.008 [2024-04-17 14:34:48.571715] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.008 [2024-04-17 14:34:48.571723] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
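The nvmf_veth_init steps above build the loopback topology the TCP tests run on: a dedicated network namespace for the target, veth pairs bridged on the host, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, plus an iptables rule admitting TCP port 4420; the three pings confirm reachability before the target starts. Condensed to the essential commands from the trace (the second target interface is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT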
00:15:40.008 [2024-04-17 14:34:48.572119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.008 [2024-04-17 14:34:48.572416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.008 [2024-04-17 14:34:48.572299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.008 [2024-04-17 14:34:48.572410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.944 14:34:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:40.944 14:34:49 -- common/autotest_common.sh@850 -- # return 0 00:15:40.944 14:34:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:40.944 14:34:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:40.944 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.944 14:34:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:40.944 14:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.944 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.944 14:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:40.944 14:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.944 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.944 14:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.944 14:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.944 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.944 [2024-04-17 14:34:49.415744] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.944 14:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.944 14:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.944 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.944 Malloc0 00:15:40.944 14:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:40.944 14:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.944 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.944 14:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.944 14:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.944 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.944 14:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.944 14:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.944 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.944 [2024-04-17 14:34:49.471798] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.944 14:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66423 00:15:40.944 14:34:49 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:40.944 14:34:49 -- nvmf/common.sh@521 -- # config=() 00:15:40.944 14:34:49 -- nvmf/common.sh@521 -- # local subsystem config 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@30 -- # READ_PID=66425 00:15:40.944 14:34:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:40.944 14:34:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:40.944 { 00:15:40.944 "params": { 00:15:40.944 "name": "Nvme$subsystem", 00:15:40.944 "trtype": "$TEST_TRANSPORT", 00:15:40.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.944 "adrfam": "ipv4", 00:15:40.944 "trsvcid": "$NVMF_PORT", 00:15:40.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.944 "hdgst": ${hdgst:-false}, 00:15:40.944 "ddgst": ${ddgst:-false} 00:15:40.944 }, 00:15:40.944 "method": "bdev_nvme_attach_controller" 00:15:40.944 } 00:15:40.944 EOF 00:15:40.944 )") 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:40.944 14:34:49 -- nvmf/common.sh@521 -- # config=() 00:15:40.944 14:34:49 -- nvmf/common.sh@521 -- # local subsystem config 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66427 00:15:40.944 14:34:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:40.944 14:34:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:40.944 { 00:15:40.944 "params": { 00:15:40.944 "name": "Nvme$subsystem", 00:15:40.944 "trtype": "$TEST_TRANSPORT", 00:15:40.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.944 "adrfam": "ipv4", 00:15:40.944 "trsvcid": "$NVMF_PORT", 00:15:40.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.944 "hdgst": ${hdgst:-false}, 00:15:40.944 "ddgst": ${ddgst:-false} 00:15:40.944 }, 00:15:40.944 "method": "bdev_nvme_attach_controller" 00:15:40.944 } 00:15:40.944 EOF 00:15:40.944 )") 00:15:40.944 14:34:49 -- nvmf/common.sh@543 -- # cat 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66430 00:15:40.944 14:34:49 -- nvmf/common.sh@543 -- # cat 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@35 -- # sync 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:40.944 14:34:49 -- nvmf/common.sh@521 -- # config=() 00:15:40.944 14:34:49 -- nvmf/common.sh@521 -- # local subsystem config 00:15:40.944 14:34:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:40.944 14:34:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:40.944 { 00:15:40.944 "params": { 00:15:40.944 "name": "Nvme$subsystem", 00:15:40.944 "trtype": "$TEST_TRANSPORT", 00:15:40.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.944 "adrfam": "ipv4", 00:15:40.944 "trsvcid": "$NVMF_PORT", 00:15:40.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.944 "hdgst": ${hdgst:-false}, 00:15:40.944 "ddgst": ${ddgst:-false} 00:15:40.944 }, 00:15:40.944 "method": 
"bdev_nvme_attach_controller" 00:15:40.944 } 00:15:40.944 EOF 00:15:40.944 )") 00:15:40.944 14:34:49 -- nvmf/common.sh@545 -- # jq . 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:40.944 14:34:49 -- nvmf/common.sh@543 -- # cat 00:15:40.944 14:34:49 -- nvmf/common.sh@545 -- # jq . 00:15:40.944 14:34:49 -- nvmf/common.sh@546 -- # IFS=, 00:15:40.944 14:34:49 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:40.944 14:34:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:40.944 "params": { 00:15:40.944 "name": "Nvme1", 00:15:40.944 "trtype": "tcp", 00:15:40.944 "traddr": "10.0.0.2", 00:15:40.944 "adrfam": "ipv4", 00:15:40.944 "trsvcid": "4420", 00:15:40.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.944 "hdgst": false, 00:15:40.944 "ddgst": false 00:15:40.944 }, 00:15:40.944 "method": "bdev_nvme_attach_controller" 00:15:40.944 }' 00:15:40.944 14:34:49 -- nvmf/common.sh@521 -- # config=() 00:15:40.944 14:34:49 -- nvmf/common.sh@521 -- # local subsystem config 00:15:40.944 14:34:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:40.944 14:34:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:40.944 { 00:15:40.944 "params": { 00:15:40.944 "name": "Nvme$subsystem", 00:15:40.944 "trtype": "$TEST_TRANSPORT", 00:15:40.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:40.944 "adrfam": "ipv4", 00:15:40.944 "trsvcid": "$NVMF_PORT", 00:15:40.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:40.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:40.944 "hdgst": ${hdgst:-false}, 00:15:40.944 "ddgst": ${ddgst:-false} 00:15:40.944 }, 00:15:40.944 "method": "bdev_nvme_attach_controller" 00:15:40.944 } 00:15:40.944 EOF 00:15:40.944 )") 00:15:40.944 14:34:49 -- nvmf/common.sh@546 -- # IFS=, 00:15:40.944 14:34:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:40.944 "params": { 00:15:40.944 "name": "Nvme1", 00:15:40.944 "trtype": "tcp", 00:15:40.944 "traddr": "10.0.0.2", 00:15:40.944 "adrfam": "ipv4", 00:15:40.944 "trsvcid": "4420", 00:15:40.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.944 "hdgst": false, 00:15:40.944 "ddgst": false 00:15:40.944 }, 00:15:40.944 "method": "bdev_nvme_attach_controller" 00:15:40.944 }' 00:15:40.944 14:34:49 -- nvmf/common.sh@543 -- # cat 00:15:40.944 14:34:49 -- nvmf/common.sh@545 -- # jq . 00:15:40.944 14:34:49 -- nvmf/common.sh@546 -- # IFS=, 00:15:40.944 14:34:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:40.944 "params": { 00:15:40.944 "name": "Nvme1", 00:15:40.944 "trtype": "tcp", 00:15:40.944 "traddr": "10.0.0.2", 00:15:40.944 "adrfam": "ipv4", 00:15:40.944 "trsvcid": "4420", 00:15:40.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.944 "hdgst": false, 00:15:40.944 "ddgst": false 00:15:40.944 }, 00:15:40.944 "method": "bdev_nvme_attach_controller" 00:15:40.944 }' 00:15:40.944 14:34:49 -- nvmf/common.sh@545 -- # jq . 
00:15:40.944 14:34:49 -- nvmf/common.sh@546 -- # IFS=, 00:15:40.944 14:34:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:40.944 "params": { 00:15:40.944 "name": "Nvme1", 00:15:40.944 "trtype": "tcp", 00:15:40.944 "traddr": "10.0.0.2", 00:15:40.944 "adrfam": "ipv4", 00:15:40.944 "trsvcid": "4420", 00:15:40.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.944 "hdgst": false, 00:15:40.944 "ddgst": false 00:15:40.944 }, 00:15:40.944 "method": "bdev_nvme_attach_controller" 00:15:40.944 }' 00:15:40.944 [2024-04-17 14:34:49.527724] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:40.944 [2024-04-17 14:34:49.527801] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:41.204 14:34:49 -- target/bdev_io_wait.sh@37 -- # wait 66423 00:15:41.204 [2024-04-17 14:34:49.549340] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:41.204 [2024-04-17 14:34:49.549449] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:41.204 [2024-04-17 14:34:49.576695] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:41.204 [2024-04-17 14:34:49.576837] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:41.204 [2024-04-17 14:34:49.578972] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:41.204 [2024-04-17 14:34:49.579039] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:41.204 [2024-04-17 14:34:49.698846] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.204 [2024-04-17 14:34:49.742862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:41.204 [2024-04-17 14:34:49.743695] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.204 [2024-04-17 14:34:49.751672] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:15:41.204 [2024-04-17 14:34:49.780356] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.204 [2024-04-17 14:34:49.796856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:41.204 [2024-04-17 14:34:49.805666] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:15:41.462 [2024-04-17 14:34:49.819842] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.462 [2024-04-17 14:34:49.845987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:41.462 [2024-04-17 14:34:49.854898] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:15:41.462 [2024-04-17 14:34:49.862841] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:15:41.462 Running I/O for 1 seconds... 
00:15:41.462 [2024-04-17 14:34:49.873045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:41.462 [2024-04-17 14:34:49.881854] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:15:41.462 [2024-04-17 14:34:49.922027] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:15:41.462 Running I/O for 1 seconds... 00:15:41.462 [2024-04-17 14:34:49.982767] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:15:41.462 Running I/O for 1 seconds... 00:15:41.462 [2024-04-17 14:34:49.991973] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:15:41.462 Running I/O for 1 seconds... 00:15:42.397 00:15:42.397 Latency(us) 00:15:42.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.397 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:42.398 Nvme1n1 : 1.00 167693.93 655.05 0.00 0.00 760.58 335.13 2517.18 00:15:42.398 =================================================================================================================== 00:15:42.398 Total : 167693.93 655.05 0.00 0.00 760.58 335.13 2517.18 00:15:42.398 00:15:42.398 Latency(us) 00:15:42.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.398 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:42.398 Nvme1n1 : 1.02 5364.92 20.96 0.00 0.00 23502.08 8162.21 39083.29 00:15:42.398 =================================================================================================================== 00:15:42.398 Total : 5364.92 20.96 0.00 0.00 23502.08 8162.21 39083.29 00:15:42.398 00:15:42.398 Latency(us) 00:15:42.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.398 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:42.398 Nvme1n1 : 1.01 8501.25 33.21 0.00 0.00 14984.70 8400.52 31695.59 00:15:42.398 =================================================================================================================== 00:15:42.398 Total : 8501.25 33.21 0.00 0.00 14984.70 8400.52 31695.59 00:15:42.656 00:15:42.656 Latency(us) 00:15:42.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.656 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:42.656 Nvme1n1 : 1.01 5237.17 20.46 0.00 0.00 24337.44 7804.74 49807.36 00:15:42.656 =================================================================================================================== 00:15:42.656 Total : 5237.17 20.46 0.00 0.00 24337.44 7804.74 49807.36 00:15:42.656 14:34:51 -- target/bdev_io_wait.sh@38 -- # wait 66425 00:15:42.656 14:34:51 -- target/bdev_io_wait.sh@39 -- # wait 66427 00:15:42.656 14:34:51 -- target/bdev_io_wait.sh@40 -- # wait 66430 00:15:42.656 14:34:51 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.656 14:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.656 14:34:51 -- common/autotest_common.sh@10 -- # set +x 00:15:42.656 14:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.656 14:34:51 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:42.656 14:34:51 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:42.656 14:34:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:42.656 14:34:51 -- nvmf/common.sh@117 -- # sync 00:15:42.656 14:34:51 -- nvmf/common.sh@119 -- 
# '[' tcp == tcp ']' 00:15:42.656 14:34:51 -- nvmf/common.sh@120 -- # set +e 00:15:42.656 14:34:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.656 14:34:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:42.656 rmmod nvme_tcp 00:15:42.656 rmmod nvme_fabrics 00:15:42.914 rmmod nvme_keyring 00:15:42.914 14:34:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.914 14:34:51 -- nvmf/common.sh@124 -- # set -e 00:15:42.914 14:34:51 -- nvmf/common.sh@125 -- # return 0 00:15:42.914 14:34:51 -- nvmf/common.sh@478 -- # '[' -n 66387 ']' 00:15:42.914 14:34:51 -- nvmf/common.sh@479 -- # killprocess 66387 00:15:42.914 14:34:51 -- common/autotest_common.sh@936 -- # '[' -z 66387 ']' 00:15:42.914 14:34:51 -- common/autotest_common.sh@940 -- # kill -0 66387 00:15:42.914 14:34:51 -- common/autotest_common.sh@941 -- # uname 00:15:42.914 14:34:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:42.914 14:34:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66387 00:15:42.914 killing process with pid 66387 00:15:42.914 14:34:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:42.914 14:34:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:42.914 14:34:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66387' 00:15:42.914 14:34:51 -- common/autotest_common.sh@955 -- # kill 66387 00:15:42.914 14:34:51 -- common/autotest_common.sh@960 -- # wait 66387 00:15:42.914 14:34:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:42.914 14:34:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:42.914 14:34:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:42.914 14:34:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.914 14:34:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:42.914 14:34:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.914 14:34:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.914 14:34:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.914 14:34:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:42.914 ************************************ 00:15:42.914 END TEST nvmf_bdev_io_wait 00:15:42.914 ************************************ 00:15:42.914 00:15:42.914 real 0m3.669s 00:15:42.914 user 0m16.184s 00:15:42.914 sys 0m1.906s 00:15:42.914 14:34:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:42.915 14:34:51 -- common/autotest_common.sh@10 -- # set +x 00:15:43.173 14:34:51 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:43.173 14:34:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:43.173 14:34:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:43.173 14:34:51 -- common/autotest_common.sh@10 -- # set +x 00:15:43.173 ************************************ 00:15:43.173 START TEST nvmf_queue_depth 00:15:43.173 ************************************ 00:15:43.173 14:34:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:43.173 * Looking for test storage... 
00:15:43.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:43.173 14:34:51 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.173 14:34:51 -- nvmf/common.sh@7 -- # uname -s 00:15:43.173 14:34:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.173 14:34:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.173 14:34:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.173 14:34:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.173 14:34:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.173 14:34:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.173 14:34:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.173 14:34:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.173 14:34:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.173 14:34:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.173 14:34:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:15:43.173 14:34:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:15:43.173 14:34:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.173 14:34:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.173 14:34:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.173 14:34:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.173 14:34:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.173 14:34:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.173 14:34:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.173 14:34:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.173 14:34:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.173 14:34:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.173 14:34:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.173 14:34:51 -- paths/export.sh@5 -- # export PATH 00:15:43.173 14:34:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.173 14:34:51 -- nvmf/common.sh@47 -- # : 0 00:15:43.173 14:34:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.173 14:34:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.173 14:34:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.173 14:34:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.173 14:34:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.173 14:34:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.173 14:34:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.173 14:34:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.173 14:34:51 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:43.173 14:34:51 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:43.173 14:34:51 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:43.173 14:34:51 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:43.173 14:34:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:43.173 14:34:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.173 14:34:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:43.173 14:34:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:43.173 14:34:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:43.173 14:34:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.173 14:34:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.173 14:34:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.173 14:34:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:43.173 14:34:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:43.173 14:34:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:43.173 14:34:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:43.173 14:34:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:43.173 14:34:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:43.173 14:34:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.173 14:34:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.173 14:34:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:43.173 14:34:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:43.173 14:34:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.173 14:34:51 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.173 14:34:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.173 14:34:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.173 14:34:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.173 14:34:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.173 14:34:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.173 14:34:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.173 14:34:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:43.173 14:34:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:43.173 Cannot find device "nvmf_tgt_br" 00:15:43.173 14:34:51 -- nvmf/common.sh@155 -- # true 00:15:43.173 14:34:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.173 Cannot find device "nvmf_tgt_br2" 00:15:43.173 14:34:51 -- nvmf/common.sh@156 -- # true 00:15:43.173 14:34:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:43.173 14:34:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:43.498 Cannot find device "nvmf_tgt_br" 00:15:43.498 14:34:51 -- nvmf/common.sh@158 -- # true 00:15:43.498 14:34:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:43.498 Cannot find device "nvmf_tgt_br2" 00:15:43.498 14:34:51 -- nvmf/common.sh@159 -- # true 00:15:43.498 14:34:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:43.498 14:34:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:43.498 14:34:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.498 14:34:51 -- nvmf/common.sh@162 -- # true 00:15:43.498 14:34:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.498 14:34:51 -- nvmf/common.sh@163 -- # true 00:15:43.498 14:34:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.498 14:34:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.498 14:34:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.498 14:34:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.498 14:34:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.498 14:34:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.498 14:34:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.498 14:34:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:43.498 14:34:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:43.498 14:34:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:43.498 14:34:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:43.498 14:34:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:43.498 14:34:51 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:43.498 14:34:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.498 14:34:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:43.498 14:34:51 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.498 14:34:51 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:43.498 14:34:51 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:43.498 14:34:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.498 14:34:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.498 14:34:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.498 14:34:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.498 14:34:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.498 14:34:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:43.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:15:43.498 00:15:43.498 --- 10.0.0.2 ping statistics --- 00:15:43.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.498 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:15:43.498 14:34:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:43.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:15:43.498 00:15:43.498 --- 10.0.0.3 ping statistics --- 00:15:43.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.498 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:43.498 14:34:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:43.498 00:15:43.498 --- 10.0.0.1 ping statistics --- 00:15:43.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.498 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:43.498 14:34:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.498 14:34:52 -- nvmf/common.sh@422 -- # return 0 00:15:43.498 14:34:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:43.498 14:34:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.498 14:34:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:43.498 14:34:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:43.498 14:34:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.498 14:34:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:43.498 14:34:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:43.498 14:34:52 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:43.498 14:34:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:43.498 14:34:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:43.498 14:34:52 -- common/autotest_common.sh@10 -- # set +x 00:15:43.498 14:34:52 -- nvmf/common.sh@470 -- # nvmfpid=66646 00:15:43.498 14:34:52 -- nvmf/common.sh@471 -- # waitforlisten 66646 00:15:43.498 14:34:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.498 14:34:52 -- common/autotest_common.sh@817 -- # '[' -z 66646 ']' 00:15:43.498 14:34:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.498 14:34:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:43.498 14:34:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.498 14:34:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:43.498 14:34:52 -- common/autotest_common.sh@10 -- # set +x 00:15:43.757 [2024-04-17 14:34:52.118040] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:43.757 [2024-04-17 14:34:52.118143] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.757 [2024-04-17 14:34:52.259774] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.757 [2024-04-17 14:34:52.316459] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.757 [2024-04-17 14:34:52.316511] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.757 [2024-04-17 14:34:52.316522] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.757 [2024-04-17 14:34:52.316531] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.757 [2024-04-17 14:34:52.316538] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.757 [2024-04-17 14:34:52.316568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.694 14:34:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:44.694 14:34:53 -- common/autotest_common.sh@850 -- # return 0 00:15:44.694 14:34:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:44.694 14:34:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:44.694 14:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:44.694 14:34:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.694 14:34:53 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:44.694 14:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.694 14:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:44.694 [2024-04-17 14:34:53.150705] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.694 14:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.694 14:34:53 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:44.694 14:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.694 14:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:44.694 Malloc0 00:15:44.694 14:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.694 14:34:53 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.694 14:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.694 14:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:44.694 14:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.694 14:34:53 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.694 14:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.694 14:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:44.694 14:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.694 14:34:53 -- target/queue_depth.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.694 14:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.694 14:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:44.694 [2024-04-17 14:34:53.202362] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.694 14:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.694 14:34:53 -- target/queue_depth.sh@30 -- # bdevperf_pid=66678 00:15:44.694 14:34:53 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.694 14:34:53 -- target/queue_depth.sh@33 -- # waitforlisten 66678 /var/tmp/bdevperf.sock 00:15:44.694 14:34:53 -- common/autotest_common.sh@817 -- # '[' -z 66678 ']' 00:15:44.694 14:34:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.694 14:34:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:44.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:44.694 14:34:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:44.694 14:34:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:44.694 14:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:44.694 14:34:53 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:44.694 [2024-04-17 14:34:53.268592] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:44.694 [2024-04-17 14:34:53.268722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66678 ] 00:15:44.952 [2024-04-17 14:34:53.407345] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.952 [2024-04-17 14:34:53.477034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.889 14:34:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:45.889 14:34:54 -- common/autotest_common.sh@850 -- # return 0 00:15:45.889 14:34:54 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:45.889 14:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.889 14:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:45.889 NVMe0n1 00:15:45.889 14:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.889 14:34:54 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:45.889 Running I/O for 10 seconds... 
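For reference, the queue-depth run whose results follow reduces to the short sequence below. This is a condensed sketch assembled from the commands traced above, not the test script itself: the framework's rpc_cmd helper is replaced by a direct scripts/rpc.py call against the default /var/tmp/spdk.sock socket, and the repo-relative paths are the ones shown in the trace.

    # Target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem, one listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf started idle (-z), queue depth 1024, 4 KiB verify workload for 10 s
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests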
00:15:58.087 00:15:58.087 Latency(us) 00:15:58.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.087 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:58.087 Verification LBA range: start 0x0 length 0x4000 00:15:58.087 NVMe0n1 : 10.09 7528.98 29.41 0.00 0.00 135395.87 27405.96 98661.47 00:15:58.087 =================================================================================================================== 00:15:58.087 Total : 7528.98 29.41 0.00 0.00 135395.87 27405.96 98661.47 00:15:58.087 0 00:15:58.087 14:35:04 -- target/queue_depth.sh@39 -- # killprocess 66678 00:15:58.087 14:35:04 -- common/autotest_common.sh@936 -- # '[' -z 66678 ']' 00:15:58.087 14:35:04 -- common/autotest_common.sh@940 -- # kill -0 66678 00:15:58.087 14:35:04 -- common/autotest_common.sh@941 -- # uname 00:15:58.087 14:35:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.087 14:35:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66678 00:15:58.087 14:35:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:58.087 14:35:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:58.087 killing process with pid 66678 00:15:58.087 14:35:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66678' 00:15:58.087 Received shutdown signal, test time was about 10.000000 seconds 00:15:58.087 00:15:58.087 Latency(us) 00:15:58.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.087 =================================================================================================================== 00:15:58.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:58.087 14:35:04 -- common/autotest_common.sh@955 -- # kill 66678 00:15:58.087 14:35:04 -- common/autotest_common.sh@960 -- # wait 66678 00:15:58.087 14:35:04 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:58.087 14:35:04 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:58.087 14:35:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:58.087 14:35:04 -- nvmf/common.sh@117 -- # sync 00:15:58.087 14:35:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.087 14:35:04 -- nvmf/common.sh@120 -- # set +e 00:15:58.087 14:35:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.087 14:35:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.087 rmmod nvme_tcp 00:15:58.087 rmmod nvme_fabrics 00:15:58.087 rmmod nvme_keyring 00:15:58.087 14:35:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.087 14:35:04 -- nvmf/common.sh@124 -- # set -e 00:15:58.087 14:35:04 -- nvmf/common.sh@125 -- # return 0 00:15:58.088 14:35:04 -- nvmf/common.sh@478 -- # '[' -n 66646 ']' 00:15:58.088 14:35:04 -- nvmf/common.sh@479 -- # killprocess 66646 00:15:58.088 14:35:04 -- common/autotest_common.sh@936 -- # '[' -z 66646 ']' 00:15:58.088 14:35:04 -- common/autotest_common.sh@940 -- # kill -0 66646 00:15:58.088 14:35:04 -- common/autotest_common.sh@941 -- # uname 00:15:58.088 14:35:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.088 14:35:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66646 00:15:58.088 14:35:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:58.088 14:35:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:58.088 killing process with pid 66646 00:15:58.088 14:35:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66646' 00:15:58.088 14:35:04 -- 
common/autotest_common.sh@955 -- # kill 66646 00:15:58.088 14:35:04 -- common/autotest_common.sh@960 -- # wait 66646 00:15:58.088 14:35:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:58.088 14:35:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:58.088 14:35:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.088 14:35:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.088 14:35:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.088 14:35:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.088 14:35:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:58.088 ************************************ 00:15:58.088 END TEST nvmf_queue_depth 00:15:58.088 ************************************ 00:15:58.088 00:15:58.088 real 0m13.492s 00:15:58.088 user 0m23.592s 00:15:58.088 sys 0m2.085s 00:15:58.088 14:35:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:58.088 14:35:05 -- common/autotest_common.sh@10 -- # set +x 00:15:58.088 14:35:05 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:58.088 14:35:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:58.088 14:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:58.088 14:35:05 -- common/autotest_common.sh@10 -- # set +x 00:15:58.088 ************************************ 00:15:58.088 START TEST nvmf_multipath 00:15:58.088 ************************************ 00:15:58.088 14:35:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:58.088 * Looking for test storage... 
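Teardown between tests is symmetric with setup. A minimal sketch of what the killprocess and nvmftestfini calls above amount to, assuming the interface, namespace and module names from this trace; the real helpers wrap these steps in kill -0 checks, retries and xtrace handling, and _remove_spdk_ns is only assumed here to be a plain namespace delete.

    kill "$bdevperf_pid" && wait "$bdevperf_pid"   # stop the bdevperf app
    sync
    modprobe -v -r nvme-tcp                        # also drops nvme_fabrics / nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"             # stop nvmf_tgt
    ip netns delete nvmf_tgt_ns_spdk               # assumption: _remove_spdk_ns removes the test namespace
    ip -4 addr flush nvmf_init_if                  # clear the initiator-side address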
00:15:58.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:58.088 14:35:05 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.088 14:35:05 -- nvmf/common.sh@7 -- # uname -s 00:15:58.088 14:35:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.088 14:35:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.088 14:35:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.088 14:35:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.088 14:35:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.088 14:35:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.088 14:35:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.088 14:35:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.088 14:35:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.088 14:35:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:15:58.088 14:35:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:15:58.088 14:35:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.088 14:35:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.088 14:35:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.088 14:35:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.088 14:35:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.088 14:35:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.088 14:35:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.088 14:35:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.088 14:35:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.088 14:35:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.088 14:35:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.088 14:35:05 -- paths/export.sh@5 -- # export PATH 00:15:58.088 14:35:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.088 14:35:05 -- nvmf/common.sh@47 -- # : 0 00:15:58.088 14:35:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.088 14:35:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.088 14:35:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.088 14:35:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.088 14:35:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.088 14:35:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.088 14:35:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.088 14:35:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.088 14:35:05 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:58.088 14:35:05 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:58.088 14:35:05 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:58.088 14:35:05 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:58.088 14:35:05 -- target/multipath.sh@43 -- # nvmftestinit 00:15:58.088 14:35:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:58.088 14:35:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.088 14:35:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:58.088 14:35:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:58.088 14:35:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:58.088 14:35:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.088 14:35:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.088 14:35:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.088 14:35:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:58.088 14:35:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.088 14:35:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.088 14:35:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:58.088 14:35:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:58.088 14:35:05 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.088 14:35:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.088 14:35:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.088 14:35:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.088 14:35:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.088 14:35:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.088 14:35:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.088 14:35:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.088 14:35:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:58.088 14:35:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:58.088 Cannot find device "nvmf_tgt_br" 00:15:58.088 14:35:05 -- nvmf/common.sh@155 -- # true 00:15:58.088 14:35:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.088 Cannot find device "nvmf_tgt_br2" 00:15:58.088 14:35:05 -- nvmf/common.sh@156 -- # true 00:15:58.088 14:35:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:58.088 14:35:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:58.088 Cannot find device "nvmf_tgt_br" 00:15:58.088 14:35:05 -- nvmf/common.sh@158 -- # true 00:15:58.088 14:35:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:58.088 Cannot find device "nvmf_tgt_br2" 00:15:58.088 14:35:05 -- nvmf/common.sh@159 -- # true 00:15:58.088 14:35:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:58.088 14:35:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:58.088 14:35:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.088 14:35:05 -- nvmf/common.sh@162 -- # true 00:15:58.088 14:35:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.088 14:35:05 -- nvmf/common.sh@163 -- # true 00:15:58.088 14:35:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.088 14:35:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.088 14:35:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.088 14:35:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.088 14:35:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.088 14:35:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.088 14:35:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.088 14:35:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:58.088 14:35:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:58.088 14:35:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:58.088 14:35:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:58.088 14:35:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:58.088 14:35:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:58.088 14:35:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:15:58.088 14:35:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.088 14:35:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.088 14:35:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:58.088 14:35:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:58.088 14:35:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.088 14:35:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.088 14:35:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.088 14:35:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.088 14:35:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.088 14:35:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:58.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:15:58.088 00:15:58.088 --- 10.0.0.2 ping statistics --- 00:15:58.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.088 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:15:58.088 14:35:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:58.088 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.088 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:58.088 00:15:58.088 --- 10.0.0.3 ping statistics --- 00:15:58.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.088 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:58.088 14:35:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:58.088 00:15:58.088 --- 10.0.0.1 ping statistics --- 00:15:58.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.088 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:58.088 14:35:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.088 14:35:05 -- nvmf/common.sh@422 -- # return 0 00:15:58.088 14:35:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:58.088 14:35:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.088 14:35:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:58.088 14:35:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.088 14:35:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:58.088 14:35:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:58.088 14:35:05 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:58.088 14:35:05 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:58.088 14:35:05 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:58.088 14:35:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:58.088 14:35:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:58.088 14:35:05 -- common/autotest_common.sh@10 -- # set +x 00:15:58.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
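The "Cannot find device" and "Cannot open network namespace" lines above are expected: nvmf_veth_init first tears down any leftover topology, and on this pass nothing exists yet, so each delete fails harmlessly. Condensed from the commands in the trace, the topology it then builds for the multipath test looks like this (initiator address 10.0.0.1 on the host, the two target portals 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge); a sketch of the traced commands, not the helper itself:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target portal
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target portal
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host -> both portals
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> initiator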
00:15:58.088 14:35:05 -- nvmf/common.sh@470 -- # nvmfpid=67002 00:15:58.088 14:35:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:58.088 14:35:05 -- nvmf/common.sh@471 -- # waitforlisten 67002 00:15:58.088 14:35:05 -- common/autotest_common.sh@817 -- # '[' -z 67002 ']' 00:15:58.088 14:35:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.088 14:35:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:58.088 14:35:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.088 14:35:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:58.088 14:35:05 -- common/autotest_common.sh@10 -- # set +x 00:15:58.088 [2024-04-17 14:35:05.694590] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:58.088 [2024-04-17 14:35:05.694680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.088 [2024-04-17 14:35:05.829803] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.088 [2024-04-17 14:35:05.893676] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.088 [2024-04-17 14:35:05.893741] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.088 [2024-04-17 14:35:05.893753] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.088 [2024-04-17 14:35:05.893761] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.088 [2024-04-17 14:35:05.893768] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
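The reactor start-up messages that follow complete the target launch; from there the multipath test traced in the rest of this section follows the outline below. This is a condensed sketch assembled from the traced commands, not the script itself: the TCP transport and Malloc0 bdev are created exactly as in the queue-depth test and are omitted, the -r flag on nvmf_create_subsystem is the one shown in the trace (ANA reporting), and $NVME_HOSTNQN/$NVME_HOSTID are the generated values printed earlier in the log.

    # ANA-reporting subsystem with two portals inside the target namespace
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Host side: connect to both portals of the same subsystem, giving one namespace with two paths
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    cat /sys/block/nvme0c0n1/ana_state    # the check_ana_state helper polls these files
    cat /sys/block/nvme0c1n1/ana_state    # both paths start out "optimized"

    # Drive fio on /dev/nvme0n1 (randrw, 4 KiB, iodepth 128, 6 s) while flipping ANA states underneath it
    scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v &
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    # ... the states are swapped once more mid-run, then set back to optimized before a second fio pass ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1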
00:15:58.088 [2024-04-17 14:35:05.893898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.088 [2024-04-17 14:35:05.894085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.088 [2024-04-17 14:35:05.894670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.088 [2024-04-17 14:35:05.894717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.088 14:35:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:58.088 14:35:06 -- common/autotest_common.sh@850 -- # return 0 00:15:58.089 14:35:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:58.089 14:35:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:58.089 14:35:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.346 14:35:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.346 14:35:06 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:58.346 [2024-04-17 14:35:06.943556] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.604 14:35:06 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:58.861 Malloc0 00:15:58.861 14:35:07 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:59.118 14:35:07 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:59.376 14:35:07 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.634 [2024-04-17 14:35:08.168424] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.634 14:35:08 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:59.892 [2024-04-17 14:35:08.460737] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:59.892 14:35:08 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 --hostid=c475d660-18c3-4238-bb35-f293e0cc1403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:16:00.151 14:35:08 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 --hostid=c475d660-18c3-4238-bb35-f293e0cc1403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:16:00.151 14:35:08 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:16:00.151 14:35:08 -- common/autotest_common.sh@1184 -- # local i=0 00:16:00.151 14:35:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.151 14:35:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:00.151 14:35:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:02.682 14:35:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:02.682 14:35:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:02.682 14:35:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.682 14:35:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:02.682 14:35:10 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.682 14:35:10 -- common/autotest_common.sh@1194 -- # return 0 00:16:02.682 14:35:10 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:16:02.683 14:35:10 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:16:02.683 14:35:10 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:16:02.683 14:35:10 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:02.683 14:35:10 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:16:02.683 14:35:10 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:16:02.683 14:35:10 -- target/multipath.sh@38 -- # return 0 00:16:02.683 14:35:10 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:16:02.683 14:35:10 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:16:02.683 14:35:10 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:16:02.683 14:35:10 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:16:02.683 14:35:10 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:16:02.683 14:35:10 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:16:02.683 14:35:10 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:16:02.683 14:35:10 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:02.683 14:35:10 -- target/multipath.sh@22 -- # local timeout=20 00:16:02.683 14:35:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:02.683 14:35:10 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:02.683 14:35:10 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:02.683 14:35:10 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:16:02.683 14:35:10 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:02.683 14:35:10 -- target/multipath.sh@22 -- # local timeout=20 00:16:02.683 14:35:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:02.683 14:35:10 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:02.683 14:35:10 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:02.683 14:35:10 -- target/multipath.sh@85 -- # echo numa 00:16:02.683 14:35:10 -- target/multipath.sh@88 -- # fio_pid=67092 00:16:02.683 14:35:10 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:02.683 14:35:10 -- target/multipath.sh@90 -- # sleep 1 00:16:02.683 [global] 00:16:02.683 thread=1 00:16:02.683 invalidate=1 00:16:02.683 rw=randrw 00:16:02.683 time_based=1 00:16:02.683 runtime=6 00:16:02.683 ioengine=libaio 00:16:02.683 direct=1 00:16:02.683 bs=4096 00:16:02.683 iodepth=128 00:16:02.683 norandommap=0 00:16:02.683 numjobs=1 00:16:02.683 00:16:02.683 verify_dump=1 00:16:02.683 verify_backlog=512 00:16:02.683 verify_state_save=0 00:16:02.683 do_verify=1 00:16:02.683 verify=crc32c-intel 00:16:02.683 [job0] 00:16:02.683 filename=/dev/nvme0n1 00:16:02.683 Could not set queue depth (nvme0n1) 00:16:02.683 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.683 fio-3.35 00:16:02.683 Starting 1 thread 00:16:03.249 14:35:11 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:03.507 14:35:12 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:03.770 14:35:12 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:16:03.770 14:35:12 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:03.770 14:35:12 -- target/multipath.sh@22 -- # local timeout=20 00:16:03.770 14:35:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:03.770 14:35:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:03.770 14:35:12 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:03.770 14:35:12 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:16:03.770 14:35:12 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:03.770 14:35:12 -- target/multipath.sh@22 -- # local timeout=20 00:16:03.770 14:35:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:03.770 14:35:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:03.770 14:35:12 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:03.770 14:35:12 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:04.028 14:35:12 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:04.286 14:35:12 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:16:04.286 14:35:12 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:04.286 14:35:12 -- target/multipath.sh@22 -- # local timeout=20 00:16:04.286 14:35:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:04.286 14:35:12 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:16:04.286 14:35:12 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:04.286 14:35:12 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:16:04.286 14:35:12 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:04.286 14:35:12 -- target/multipath.sh@22 -- # local timeout=20 00:16:04.286 14:35:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:04.286 14:35:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:04.286 14:35:12 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:04.286 14:35:12 -- target/multipath.sh@104 -- # wait 67092 00:16:08.472 00:16:08.472 job0: (groupid=0, jobs=1): err= 0: pid=67118: Wed Apr 17 14:35:17 2024 00:16:08.472 read: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(239MiB/6007msec) 00:16:08.472 slat (usec): min=6, max=6209, avg=57.48, stdev=225.22 00:16:08.472 clat (usec): min=1645, max=14891, avg=8533.60, stdev=1482.97 00:16:08.472 lat (usec): min=1661, max=14902, avg=8591.08, stdev=1487.93 00:16:08.472 clat percentiles (usec): 00:16:08.472 | 1.00th=[ 4490], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 7767], 00:16:08.472 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:16:08.472 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[12256], 00:16:08.472 | 99.00th=[13304], 99.50th=[13566], 99.90th=[13960], 99.95th=[14091], 00:16:08.472 | 99.99th=[14484] 00:16:08.472 bw ( KiB/s): min=10280, max=27232, per=52.82%, avg=21560.00, stdev=5214.26, samples=12 00:16:08.472 iops : min= 2570, max= 6808, avg=5390.00, stdev=1303.56, samples=12 00:16:08.472 write: IOPS=5987, BW=23.4MiB/s (24.5MB/s)(127MiB/5412msec); 0 zone resets 00:16:08.472 slat (usec): min=14, max=3514, avg=65.72, stdev=158.63 00:16:08.472 clat (usec): min=2113, max=14454, avg=7375.93, stdev=1290.18 00:16:08.472 lat (usec): min=2138, max=14819, avg=7441.65, stdev=1294.38 00:16:08.472 clat percentiles (usec): 00:16:08.472 | 1.00th=[ 3458], 5.00th=[ 4424], 10.00th=[ 5735], 20.00th=[ 6915], 00:16:08.472 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7767], 00:16:08.472 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8717], 00:16:08.472 | 99.00th=[11600], 99.50th=[11994], 99.90th=[13435], 99.95th=[13698], 00:16:08.472 | 99.99th=[13960] 00:16:08.472 bw ( KiB/s): min=10464, max=26800, per=90.01%, avg=21556.67, stdev=5041.10, samples=12 00:16:08.472 iops : min= 2616, max= 6700, avg=5389.17, stdev=1260.28, samples=12 00:16:08.472 lat (msec) : 2=0.02%, 4=1.25%, 10=92.80%, 20=5.93% 00:16:08.472 cpu : usr=5.64%, sys=22.26%, ctx=5415, majf=0, minf=78 00:16:08.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:08.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.472 issued rwts: total=61303,32402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.472 00:16:08.472 Run status group 0 (all jobs): 00:16:08.472 READ: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=239MiB (251MB), run=6007-6007msec 00:16:08.472 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=127MiB (133MB), run=5412-5412msec 00:16:08.472 00:16:08.472 Disk stats (read/write): 00:16:08.472 nvme0n1: ios=60454/31793, merge=0/0, 
ticks=493813/219677, in_queue=713490, util=98.65% 00:16:08.730 14:35:17 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:09.002 14:35:17 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:09.272 14:35:17 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:16:09.272 14:35:17 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:09.272 14:35:17 -- target/multipath.sh@22 -- # local timeout=20 00:16:09.272 14:35:17 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:09.272 14:35:17 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:09.272 14:35:17 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:09.272 14:35:17 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:16:09.272 14:35:17 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:09.272 14:35:17 -- target/multipath.sh@22 -- # local timeout=20 00:16:09.272 14:35:17 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:09.272 14:35:17 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:09.272 14:35:17 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:09.272 14:35:17 -- target/multipath.sh@113 -- # echo round-robin 00:16:09.272 14:35:17 -- target/multipath.sh@116 -- # fio_pid=67198 00:16:09.272 14:35:17 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:09.272 14:35:17 -- target/multipath.sh@118 -- # sleep 1 00:16:09.272 [global] 00:16:09.272 thread=1 00:16:09.272 invalidate=1 00:16:09.272 rw=randrw 00:16:09.272 time_based=1 00:16:09.272 runtime=6 00:16:09.272 ioengine=libaio 00:16:09.272 direct=1 00:16:09.272 bs=4096 00:16:09.272 iodepth=128 00:16:09.272 norandommap=0 00:16:09.272 numjobs=1 00:16:09.272 00:16:09.272 verify_dump=1 00:16:09.272 verify_backlog=512 00:16:09.272 verify_state_save=0 00:16:09.272 do_verify=1 00:16:09.272 verify=crc32c-intel 00:16:09.272 [job0] 00:16:09.272 filename=/dev/nvme0n1 00:16:09.272 Could not set queue depth (nvme0n1) 00:16:09.272 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.272 fio-3.35 00:16:09.272 Starting 1 thread 00:16:10.204 14:35:18 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:10.462 14:35:18 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:10.721 14:35:19 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:16:10.721 14:35:19 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:10.721 14:35:19 -- target/multipath.sh@22 -- # local timeout=20 00:16:10.721 14:35:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:10.721 14:35:19 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:16:10.721 14:35:19 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:10.721 14:35:19 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:16:10.721 14:35:19 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:10.721 14:35:19 -- target/multipath.sh@22 -- # local timeout=20 00:16:10.721 14:35:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:10.721 14:35:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:10.721 14:35:19 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:10.721 14:35:19 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:10.980 14:35:19 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:11.239 14:35:19 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:16:11.239 14:35:19 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:11.239 14:35:19 -- target/multipath.sh@22 -- # local timeout=20 00:16:11.239 14:35:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:11.239 14:35:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:11.239 14:35:19 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:11.239 14:35:19 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:16:11.239 14:35:19 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:11.239 14:35:19 -- target/multipath.sh@22 -- # local timeout=20 00:16:11.239 14:35:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:11.239 14:35:19 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:11.239 14:35:19 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:11.239 14:35:19 -- target/multipath.sh@132 -- # wait 67198 00:16:15.424 00:16:15.424 job0: (groupid=0, jobs=1): err= 0: pid=67219: Wed Apr 17 14:35:23 2024 00:16:15.424 read: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(263MiB/6002msec) 00:16:15.424 slat (usec): min=2, max=6512, avg=44.26, stdev=195.39 00:16:15.424 clat (usec): min=324, max=15595, avg=7776.64, stdev=1847.55 00:16:15.424 lat (usec): min=344, max=15604, avg=7820.89, stdev=1863.39 00:16:15.424 clat percentiles (usec): 00:16:15.424 | 1.00th=[ 3720], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 6128], 00:16:15.424 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8291], 00:16:15.424 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11338], 00:16:15.424 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14222], 99.95th=[14746], 00:16:15.424 | 99.99th=[15008] 00:16:15.424 bw ( KiB/s): min=13168, max=35241, per=53.09%, avg=23792.45, stdev=6907.22, samples=11 00:16:15.424 iops : min= 3292, max= 8810, avg=5948.09, stdev=1726.76, samples=11 00:16:15.424 write: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(140MiB/5431msec); 0 zone resets 00:16:15.424 slat (usec): min=3, max=6877, avg=56.23, stdev=144.89 00:16:15.424 clat (usec): min=698, max=15072, avg=6670.38, stdev=1741.78 00:16:15.424 lat (usec): min=722, max=15076, avg=6726.62, stdev=1757.25 00:16:15.424 clat percentiles (usec): 00:16:15.424 | 1.00th=[ 2868], 5.00th=[ 3556], 10.00th=[ 4015], 20.00th=[ 4752], 00:16:15.424 | 30.00th=[ 5800], 40.00th=[ 6915], 50.00th=[ 7308], 60.00th=[ 7504], 00:16:15.424 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:16:15.424 | 99.00th=[11076], 99.50th=[11863], 99.90th=[13435], 99.95th=[13829], 00:16:15.424 | 99.99th=[14877] 00:16:15.424 bw ( KiB/s): min=13664, max=34778, per=90.21%, avg=23804.91, stdev=6647.45, samples=11 00:16:15.424 iops : min= 3416, max= 8694, avg=5951.09, stdev=1661.66, samples=11 00:16:15.424 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:16:15.424 lat (msec) : 2=0.10%, 4=4.27%, 10=91.31%, 20=4.27% 00:16:15.424 cpu : usr=6.32%, sys=22.83%, ctx=5978, majf=0, minf=114 00:16:15.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:15.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.424 issued rwts: total=67244,35827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.424 00:16:15.424 Run status group 0 (all jobs): 00:16:15.424 READ: bw=43.8MiB/s (45.9MB/s), 43.8MiB/s-43.8MiB/s (45.9MB/s-45.9MB/s), io=263MiB (275MB), run=6002-6002msec 00:16:15.424 WRITE: bw=25.8MiB/s (27.0MB/s), 25.8MiB/s-25.8MiB/s (27.0MB/s-27.0MB/s), io=140MiB (147MB), run=5431-5431msec 00:16:15.424 00:16:15.424 Disk stats (read/write): 00:16:15.424 nvme0n1: ios=66524/35082, merge=0/0, ticks=491613/215530, in_queue=707143, util=98.65% 00:16:15.424 14:35:23 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:15.424 14:35:23 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.424 14:35:23 -- common/autotest_common.sh@1205 -- # local i=0 00:16:15.424 14:35:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:15.424 14:35:23 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.424 14:35:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.424 14:35:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:15.424 14:35:24 -- common/autotest_common.sh@1217 -- # return 0 00:16:15.424 14:35:24 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:15.683 14:35:24 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:16:15.683 14:35:24 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:16:15.683 14:35:24 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:16:15.683 14:35:24 -- target/multipath.sh@144 -- # nvmftestfini 00:16:15.683 14:35:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:15.683 14:35:24 -- nvmf/common.sh@117 -- # sync 00:16:15.683 14:35:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:15.683 14:35:24 -- nvmf/common.sh@120 -- # set +e 00:16:15.683 14:35:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:15.683 14:35:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:15.941 rmmod nvme_tcp 00:16:15.941 rmmod nvme_fabrics 00:16:15.941 rmmod nvme_keyring 00:16:15.941 14:35:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:15.941 14:35:24 -- nvmf/common.sh@124 -- # set -e 00:16:15.941 14:35:24 -- nvmf/common.sh@125 -- # return 0 00:16:15.941 14:35:24 -- nvmf/common.sh@478 -- # '[' -n 67002 ']' 00:16:15.941 14:35:24 -- nvmf/common.sh@479 -- # killprocess 67002 00:16:15.941 14:35:24 -- common/autotest_common.sh@936 -- # '[' -z 67002 ']' 00:16:15.941 14:35:24 -- common/autotest_common.sh@940 -- # kill -0 67002 00:16:15.941 14:35:24 -- common/autotest_common.sh@941 -- # uname 00:16:15.941 14:35:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:15.941 14:35:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67002 00:16:15.941 killing process with pid 67002 00:16:15.941 14:35:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:15.941 14:35:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:15.941 14:35:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67002' 00:16:15.941 14:35:24 -- common/autotest_common.sh@955 -- # kill 67002 00:16:15.941 14:35:24 -- common/autotest_common.sh@960 -- # wait 67002 00:16:16.199 14:35:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:16.199 14:35:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:16.199 14:35:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:16.199 14:35:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.199 14:35:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.199 14:35:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.199 14:35:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.199 14:35:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.199 14:35:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:16.199 00:16:16.199 real 0m19.425s 00:16:16.199 user 1m13.656s 00:16:16.199 sys 0m9.336s 00:16:16.199 14:35:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:16.199 14:35:24 -- common/autotest_common.sh@10 -- # set +x 00:16:16.199 ************************************ 00:16:16.199 END TEST nvmf_multipath 00:16:16.199 ************************************ 00:16:16.199 14:35:24 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:16.199 14:35:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:16.199 14:35:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.199 14:35:24 -- common/autotest_common.sh@10 -- # set +x 00:16:16.199 ************************************ 00:16:16.199 START TEST nvmf_zcopy 00:16:16.199 ************************************ 00:16:16.199 14:35:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:16.458 * Looking for test storage... 00:16:16.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:16.458 14:35:24 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.458 14:35:24 -- nvmf/common.sh@7 -- # uname -s 00:16:16.458 14:35:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.458 14:35:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.458 14:35:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.458 14:35:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.458 14:35:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.458 14:35:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.458 14:35:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.458 14:35:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.458 14:35:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.459 14:35:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.459 14:35:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:16:16.459 14:35:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:16:16.459 14:35:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.459 14:35:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.459 14:35:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.459 14:35:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.459 14:35:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.459 14:35:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.459 14:35:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.459 14:35:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.459 14:35:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.459 14:35:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.459 14:35:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.459 14:35:24 -- paths/export.sh@5 -- # export PATH 00:16:16.459 14:35:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.459 14:35:24 -- nvmf/common.sh@47 -- # : 0 00:16:16.459 14:35:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.459 14:35:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.459 14:35:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.459 14:35:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.459 14:35:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.459 14:35:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.459 14:35:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.459 14:35:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.459 14:35:24 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:16.459 14:35:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:16.459 14:35:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.459 14:35:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:16.459 14:35:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:16.459 14:35:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:16.459 14:35:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.459 14:35:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.459 14:35:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.459 14:35:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:16.459 14:35:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:16.459 14:35:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:16.459 14:35:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:16.459 14:35:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:16.459 14:35:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:16.459 14:35:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.459 14:35:24 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.459 14:35:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:16.459 14:35:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:16.459 14:35:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.459 14:35:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.459 14:35:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.459 14:35:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.459 14:35:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.459 14:35:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.459 14:35:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.459 14:35:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.459 14:35:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:16.459 14:35:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:16.459 Cannot find device "nvmf_tgt_br" 00:16:16.459 14:35:24 -- nvmf/common.sh@155 -- # true 00:16:16.459 14:35:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.459 Cannot find device "nvmf_tgt_br2" 00:16:16.459 14:35:24 -- nvmf/common.sh@156 -- # true 00:16:16.459 14:35:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:16.459 14:35:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:16.459 Cannot find device "nvmf_tgt_br" 00:16:16.459 14:35:24 -- nvmf/common.sh@158 -- # true 00:16:16.459 14:35:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:16.459 Cannot find device "nvmf_tgt_br2" 00:16:16.459 14:35:24 -- nvmf/common.sh@159 -- # true 00:16:16.459 14:35:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:16.459 14:35:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:16.459 14:35:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.459 14:35:24 -- nvmf/common.sh@162 -- # true 00:16:16.459 14:35:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.459 14:35:24 -- nvmf/common.sh@163 -- # true 00:16:16.459 14:35:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.459 14:35:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.459 14:35:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.459 14:35:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.459 14:35:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.459 14:35:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.459 14:35:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.459 14:35:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.459 14:35:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.718 14:35:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:16.718 14:35:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:16.718 14:35:25 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:16.718 14:35:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:16.718 14:35:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.718 14:35:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.718 14:35:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:16.718 14:35:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:16.718 14:35:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:16.718 14:35:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.718 14:35:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.718 14:35:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.718 14:35:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.718 14:35:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.718 14:35:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:16.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:16:16.718 00:16:16.718 --- 10.0.0.2 ping statistics --- 00:16:16.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.718 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:16.718 14:35:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:16.718 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:16.718 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:16:16.718 00:16:16.718 --- 10.0.0.3 ping statistics --- 00:16:16.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.718 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:16.718 14:35:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:16.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:16:16.718 00:16:16.718 --- 10.0.0.1 ping statistics --- 00:16:16.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.718 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:16.718 14:35:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.718 14:35:25 -- nvmf/common.sh@422 -- # return 0 00:16:16.718 14:35:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:16.718 14:35:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.718 14:35:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:16.718 14:35:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:16.718 14:35:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.718 14:35:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:16.718 14:35:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:16.718 14:35:25 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:16.718 14:35:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:16.718 14:35:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:16.718 14:35:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.718 14:35:25 -- nvmf/common.sh@470 -- # nvmfpid=67471 00:16:16.718 14:35:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:16.718 14:35:25 -- nvmf/common.sh@471 -- # waitforlisten 67471 00:16:16.718 14:35:25 -- common/autotest_common.sh@817 -- # '[' -z 67471 ']' 00:16:16.718 14:35:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.718 14:35:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:16.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.718 14:35:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.718 14:35:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:16.719 14:35:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.719 [2024-04-17 14:35:25.251249] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:16:16.719 [2024-04-17 14:35:25.251376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.977 [2024-04-17 14:35:25.387493] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.977 [2024-04-17 14:35:25.455300] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.977 [2024-04-17 14:35:25.455585] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.977 [2024-04-17 14:35:25.455688] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.977 [2024-04-17 14:35:25.455781] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.977 [2024-04-17 14:35:25.455893] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
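(Aside: the nvmf_veth_init trace above, nvmf/common.sh@166-207, builds the whole test network out of three veth pairs, one bridge and a target network namespace. The condensed sketch below restates only those commands, with the exact interface names and addresses this run used; the teardown attempts at @154-163, the error handling and the surrounding shell functions are omitted, so treat it as an illustration of the topology rather than the harness itself.)

# Test-network topology as set up by nvmf_veth_init in the trace above.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge;
# the two target-side interfaces are moved into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator (root namespace); 10.0.0.2 / 10.0.0.3 = target listeners in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge joining the root-namespace ends, plus firewall openings for NVMe/TCP port 4420.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, matching the ping output in the log above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1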
00:16:16.977 [2024-04-17 14:35:25.456080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.911 14:35:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:17.911 14:35:26 -- common/autotest_common.sh@850 -- # return 0 00:16:17.911 14:35:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:17.911 14:35:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:17.911 14:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 14:35:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.911 14:35:26 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:17.911 14:35:26 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:17.911 14:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.911 14:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 [2024-04-17 14:35:26.245163] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.911 14:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.911 14:35:26 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:17.911 14:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.911 14:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 14:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.911 14:35:26 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.911 14:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.911 14:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 [2024-04-17 14:35:26.261272] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.911 14:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.911 14:35:26 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:17.911 14:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.911 14:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 14:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.911 14:35:26 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:17.911 14:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.911 14:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 malloc0 00:16:17.911 14:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.911 14:35:26 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:17.911 14:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.911 14:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 14:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.911 14:35:26 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:17.911 14:35:26 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:17.911 14:35:26 -- nvmf/common.sh@521 -- # config=() 00:16:17.911 14:35:26 -- nvmf/common.sh@521 -- # local subsystem config 00:16:17.911 14:35:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:17.912 14:35:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:17.912 { 00:16:17.912 "params": { 00:16:17.912 "name": "Nvme$subsystem", 00:16:17.912 "trtype": "$TEST_TRANSPORT", 
00:16:17.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.912 "adrfam": "ipv4", 00:16:17.912 "trsvcid": "$NVMF_PORT", 00:16:17.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.912 "hdgst": ${hdgst:-false}, 00:16:17.912 "ddgst": ${ddgst:-false} 00:16:17.912 }, 00:16:17.912 "method": "bdev_nvme_attach_controller" 00:16:17.912 } 00:16:17.912 EOF 00:16:17.912 )") 00:16:17.912 14:35:26 -- nvmf/common.sh@543 -- # cat 00:16:17.912 14:35:26 -- nvmf/common.sh@545 -- # jq . 00:16:17.912 14:35:26 -- nvmf/common.sh@546 -- # IFS=, 00:16:17.912 14:35:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:17.912 "params": { 00:16:17.912 "name": "Nvme1", 00:16:17.912 "trtype": "tcp", 00:16:17.912 "traddr": "10.0.0.2", 00:16:17.912 "adrfam": "ipv4", 00:16:17.912 "trsvcid": "4420", 00:16:17.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.912 "hdgst": false, 00:16:17.912 "ddgst": false 00:16:17.912 }, 00:16:17.912 "method": "bdev_nvme_attach_controller" 00:16:17.912 }' 00:16:17.912 [2024-04-17 14:35:26.339774] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:16:17.912 [2024-04-17 14:35:26.339858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67504 ] 00:16:17.912 [2024-04-17 14:35:26.475397] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.170 [2024-04-17 14:35:26.545559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.170 [2024-04-17 14:35:26.554466] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:16:18.170 [2024-04-17 14:35:26.686286] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:16:18.170 Running I/O for 10 seconds... 
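(Aside: the xtrace above interleaves the target-side RPC configuration with the bdevperf launch. Pulled apart, this first zcopy run reduces to the sketch below, assembled only from commands and the JSON fragment already printed in the trace. Two assumptions for readability: the outer "subsystems"/"config" envelope is the standard SPDK JSON-config layout, since gen_nvmf_target_json only echoes the inner attach-controller object here, and the config is written to a temporary file instead of the /dev/fd/62 process substitution the test uses.)

# Target side: TCP transport with --zcopy and in-capsule data size 0 (-c 0), one subsystem,
# one 32 MiB / 4 KiB-block malloc namespace -- the same RPCs traced in target/zcopy.sh@22-30.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: bdevperf attaches to the subsystem over TCP and runs the 10 s verify
# workload (queue depth 128, 8 KiB I/O) whose results follow below.
cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nvmf_bdev.json -t 10 -q 128 -w verify -o 8192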
00:16:28.180 00:16:28.180 Latency(us) 00:16:28.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.180 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:28.180 Verification LBA range: start 0x0 length 0x1000 00:16:28.180 Nvme1n1 : 10.02 5351.58 41.81 0.00 0.00 23840.86 1050.07 35746.91 00:16:28.180 =================================================================================================================== 00:16:28.180 Total : 5351.58 41.81 0.00 0.00 23840.86 1050.07 35746.91 00:16:28.438 14:35:36 -- target/zcopy.sh@39 -- # perfpid=67621 00:16:28.438 14:35:36 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:28.438 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:28.438 14:35:36 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:28.438 14:35:36 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:28.438 14:35:36 -- nvmf/common.sh@521 -- # config=() 00:16:28.438 14:35:36 -- nvmf/common.sh@521 -- # local subsystem config 00:16:28.438 14:35:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.438 14:35:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.438 { 00:16:28.438 "params": { 00:16:28.438 "name": "Nvme$subsystem", 00:16:28.438 "trtype": "$TEST_TRANSPORT", 00:16:28.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.438 "adrfam": "ipv4", 00:16:28.438 "trsvcid": "$NVMF_PORT", 00:16:28.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.438 "hdgst": ${hdgst:-false}, 00:16:28.438 "ddgst": ${ddgst:-false} 00:16:28.438 }, 00:16:28.438 "method": "bdev_nvme_attach_controller" 00:16:28.438 } 00:16:28.438 EOF 00:16:28.438 )") 00:16:28.438 14:35:36 -- nvmf/common.sh@543 -- # cat 00:16:28.438 [2024-04-17 14:35:36.922275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:36.922332] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 14:35:36 -- nvmf/common.sh@545 -- # jq . 
00:16:28.438 14:35:36 -- nvmf/common.sh@546 -- # IFS=, 00:16:28.438 14:35:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:28.438 "params": { 00:16:28.438 "name": "Nvme1", 00:16:28.438 "trtype": "tcp", 00:16:28.438 "traddr": "10.0.0.2", 00:16:28.438 "adrfam": "ipv4", 00:16:28.438 "trsvcid": "4420", 00:16:28.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.438 "hdgst": false, 00:16:28.438 "ddgst": false 00:16:28.438 }, 00:16:28.438 "method": "bdev_nvme_attach_controller" 00:16:28.438 }' 00:16:28.438 [2024-04-17 14:35:36.930253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:36.930292] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:36.938232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:36.938274] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:36.950299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:36.950364] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:36.962286] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:36.962341] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:36.962312] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:16:28.438 [2024-04-17 14:35:36.962415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67621 ] 00:16:28.438 [2024-04-17 14:35:36.974267] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:36.974333] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:36.986302] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:36.986364] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:36.998274] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:36.998324] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:37.006251] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:37.006292] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:37.014252] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:37.014291] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:37.026312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 14:35:37.026376] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.438 [2024-04-17 14:35:37.034272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.438 [2024-04-17 
14:35:37.034313] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.046300] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.046353] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.054278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.054323] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.062292] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.062345] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.070306] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.070358] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.082289] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.082335] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.094312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.094372] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.099643] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.696 [2024-04-17 14:35:37.106340] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.106410] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.118310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.118359] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.130311] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.130364] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.142320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.142372] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.154315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.154363] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.162304] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.162343] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.170313] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.170356] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.182348] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.182399] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.184727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.696 [2024-04-17 14:35:37.193912] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:16:28.696 [2024-04-17 14:35:37.194339] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.194388] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.206345] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.206407] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.218338] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.218390] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.230367] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.230428] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.242344] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.242389] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.254362] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.254411] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.266349] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.266411] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.278375] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.278434] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.696 [2024-04-17 14:35:37.290410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.696 [2024-04-17 14:35:37.290473] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.302416] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.302489] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.329142] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.329198] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.341484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.341561] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.351285] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:16:28.986 [2024-04-17 14:35:37.353424] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.353483] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 Running I/O for 5 seconds... 00:16:28.986 [2024-04-17 14:35:37.361536] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.361592] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.377398] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.377472] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.392555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.392626] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.403437] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.403501] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.415587] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.415652] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.430241] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.430295] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.441276] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.441340] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.457075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.457153] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.472204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.472275] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.482812] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.482865] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.496839] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.496905] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.509740] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.509814] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.523133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.523195] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.538011] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.538077] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.553374] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.553435] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.570194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.570266] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.986 [2024-04-17 14:35:37.585620] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.986 [2024-04-17 14:35:37.585698] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.243 [2024-04-17 14:35:37.596129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.243 [2024-04-17 14:35:37.596197] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.243 [2024-04-17 14:35:37.609938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.243 [2024-04-17 14:35:37.610012] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.243 [2024-04-17 14:35:37.624923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.243 [2024-04-17 14:35:37.625016] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.243 [2024-04-17 14:35:37.641526] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.243 [2024-04-17 14:35:37.641590] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.243 [2024-04-17 14:35:37.658572] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.243 [2024-04-17 14:35:37.658635] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.669323] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.669385] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.681333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.681389] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.696547] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.696619] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.707491] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.707553] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.722460] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.722511] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.737971] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.738037] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.755471] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.755534] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.771614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.771684] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.782483] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.782548] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.796056] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.796133] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.808652] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.808707] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.822181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.822237] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.244 [2024-04-17 14:35:37.838120] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.244 [2024-04-17 14:35:37.838173] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.849508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.849578] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.863755] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.863843] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.875696] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.875752] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.888842] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.888909] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.901787] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.901860] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.914581] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.914652] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.929492] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.929565] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.942450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.942519] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.955052] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.955114] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.970509] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.970580] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.981298] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.981367] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:37.993313] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:37.993378] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:38.005459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:38.005536] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:38.022148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:38.022211] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:38.033500] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:38.033555] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:38.047434] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:38.047507] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:38.060261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:38.060324] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:38.075692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:38.075764] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:38.087553] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:38.087622] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.502 [2024-04-17 14:35:38.102110] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.502 [2024-04-17 14:35:38.102182] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.118602] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.118653] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.135046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.135117] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.145928] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.146009] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.162065] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.162142] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.177613] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.177684] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.194707] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.194775] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.205343] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.205408] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.218925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.218996] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.234147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.234202] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.251732] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.251789] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.268817] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.268874] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.284590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.284647] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.300647] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.300700] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.311307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.311359] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.328280] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.328341] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.343390] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.343458] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-17 14:35:38.354612] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-17 14:35:38.354679] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.018 [2024-04-17 14:35:38.367018] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.018 [2024-04-17 14:35:38.367080] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.018 [2024-04-17 14:35:38.382249] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.018 [2024-04-17 14:35:38.382346] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.018 [2024-04-17 14:35:38.393372] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.018 [2024-04-17 14:35:38.393440] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.018 [2024-04-17 14:35:38.408401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.018 [2024-04-17 14:35:38.408477] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.018 [2024-04-17 14:35:38.419507] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.018 [2024-04-17 14:35:38.419570] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.018 [2024-04-17 14:35:38.432923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.018 [2024-04-17 14:35:38.433001] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.018 [2024-04-17 14:35:38.448145] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.448214] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.459453] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.459520] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.475530] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.475609] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.487263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.487323] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.501836] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.501912] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.517220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.517312] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.527451] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.527520] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.541004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.541078] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.556278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.556354] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.566839] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.566896] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.580314] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.580382] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.596150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.596207] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.019 [2024-04-17 14:35:38.612217] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.019 [2024-04-17 14:35:38.612257] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.622120] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.622158] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.634304] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.634342] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.645544] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.645582] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.659098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.659135] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.675414] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.675466] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.692383] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.692435] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.709724] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.709774] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.725442] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.725477] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.734784] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.734820] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.751174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.751214] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.762017] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.762051] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.777235] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.777296] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.792535] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.792578] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.802311] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.802352] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.818906] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.818983] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.834640] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.834683] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.844288] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.844343] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.861269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.861334] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.278 [2024-04-17 14:35:38.876435] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.278 [2024-04-17 14:35:38.876477] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:38.885933] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:38.885981] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:38.902240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:38.902279] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:38.917504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:38.917541] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:38.932921] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:38.932968] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:38.942562] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:38.942599] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:38.959232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:38.959270] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:38.975381] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:38.975419] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:38.992089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:38.992126] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.008292] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.008330] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.026344] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.026382] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.041259] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.041297] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.051117] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.051156] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.067401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.067439] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.077431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.077469] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.089038] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.089085] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.100250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.100287] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.113374] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.113412] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.123619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.123656] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.537 [2024-04-17 14:35:39.135709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.537 [2024-04-17 14:35:39.135747] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.147121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.147156] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.160988] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.161025] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.176858] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.176895] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.193545] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.193582] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.211705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.211745] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.226878] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.226920] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.236499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.236535] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.252573] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.252611] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.268653] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.268694] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.287204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.287244] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.302717] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.302755] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.319023] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.796 [2024-04-17 14:35:39.319060] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.796 [2024-04-17 14:35:39.334865] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-17 14:35:39.334903] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-17 14:35:39.345014] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-17 14:35:39.345050] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-17 14:35:39.359756] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-17 14:35:39.359794] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-17 14:35:39.374567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-17 14:35:39.374606] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-17 14:35:39.389735] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-17 14:35:39.389773] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.055 [2024-04-17 14:35:39.399747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.399785] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.414735] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.414773] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.430097] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.430135] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.440297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.440333] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.456867] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.456909] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.473630] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.473667] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.490328] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.490366] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.506316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.506352] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.515783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.515820] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.532610] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.532648] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.548220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.548266] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.564459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.564497] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.581230] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.581270] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.599084] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.599120] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.614485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.614521] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.624917] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.624970] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.640708] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.640745] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-17 14:35:39.656736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-17 14:35:39.656779] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.674984] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.675021] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.690031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.690068] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.706278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.706316] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.722864] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.722904] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.739646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.739683] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.755615] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.755652] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.765289] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.765323] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.777613] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.777651] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.793327] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.793361] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.810730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.810764] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.827736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.827770] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.843472] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.843506] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.860081] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.860114] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.875714] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.875774] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.885668] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.885728] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.314 [2024-04-17 14:35:39.902512] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.314 [2024-04-17 14:35:39.902571] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:39.916657] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:39.916702] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:39.932815] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:39.932865] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:39.949578] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:39.949639] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:39.966667] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:39.966713] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:39.982252] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:39.982295] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:39.992160] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:39.992220] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.008873] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.008924] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.024216] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.024263] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.034375] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.034411] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.049487] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.049521] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.059564] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.059598] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.071410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.071445] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.086694] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.086731] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.103806] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.103846] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.118737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.118777] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.134833] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.134875] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.151650] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.151687] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.574 [2024-04-17 14:35:40.167897] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.574 [2024-04-17 14:35:40.167935] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.177297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.177331] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.189605] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.189638] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.201047] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.201081] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.212330] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.212362] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.223625] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.223658] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.238924] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.238968] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.248610] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.248643] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.269350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.269402] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.288112] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.288185] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.306602] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.306654] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.323855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.323928] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.337372] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.337427] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.357086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.357137] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.372370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.372446] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.391009] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.391087] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.409791] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.409843] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.833 [2024-04-17 14:35:40.426378] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.833 [2024-04-17 14:35:40.426419] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.443711] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.443754] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.460837] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.460879] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.477362] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.477402] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.494721] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.494761] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.511571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.511613] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.528443] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.528485] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.545962] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.546000] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.561895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.561969] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.578370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.578410] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.595272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.595317] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.613521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.613562] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.630128] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.630168] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.647532] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.647573] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.664320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.664371] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.092 [2024-04-17 14:35:40.681583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.092 [2024-04-17 14:35:40.681623] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.697332] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.697377] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.713793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.713835] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.730493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.730531] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.747121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.747158] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.763685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.763731] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.782091] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.782129] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.797442] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.797475] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.813829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.813862] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.831995] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.832031] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.847158] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.847191] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.856800] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.856833] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.873046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.873079] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.889168] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.889203] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.905464] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.905497] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.922123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.922156] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.351 [2024-04-17 14:35:40.938586] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.351 [2024-04-17 14:35:40.938620] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:40.957488] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:40.957527] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:40.972275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:40.972312] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:40.988381] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:40.988417] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.005111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.005144] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.023498] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.023534] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.038688] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.038723] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.049062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.049094] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.065733] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.065771] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.081705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.081741] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.099998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.100032] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.114969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.115002] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.124684] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.124731] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.141157] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.141195] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.156312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.156346] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.165984] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.166016] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.181891] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.181924] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.610 [2024-04-17 14:35:41.199407] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.610 [2024-04-17 14:35:41.199442] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.215129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.215168] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.233675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.233711] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.249117] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.249151] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.265650] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.265684] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.284292] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.284328] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.299698] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.299732] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.317336] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.317370] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.333038] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.333072] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.342453] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.342486] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.358495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.358528] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.375383] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.375417] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.391518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.391551] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.406612] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.406673] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.424262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.424319] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.440290] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.440337] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.869 [2024-04-17 14:35:41.456872] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.869 [2024-04-17 14:35:41.456911] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.475925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.475986] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.491279] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.491329] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.507235] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.507275] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.524880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.524943] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.541181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.541231] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.550914] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.550963] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.566533] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.566568] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.584025] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.584060] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.599928] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.599973] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.617767] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.617802] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.633651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.633692] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.650351] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.650396] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.668676] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.668713] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.684392] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.684426] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.701067] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.701100] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.149 [2024-04-17 14:35:41.717254] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.149 [2024-04-17 14:35:41.717290] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.735393] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.735428] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.750372] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.750407] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.765862] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.765899] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.784459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.784495] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.799866] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.799900] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.818772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.818825] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.833941] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.834001] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.844121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.844183] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.860393] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.860455] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.877310] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.877378] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.893968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.894024] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.910405] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.910452] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.927137] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.927175] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.942937] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.943008] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.952412] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.952449] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.968846] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.968904] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:41.984722] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:41.984786] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.408 [2024-04-17 14:35:42.002161] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.408 [2024-04-17 14:35:42.002223] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.666 [2024-04-17 14:35:42.018417] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.666 [2024-04-17 14:35:42.018478] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.666 [2024-04-17 14:35:42.035015] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.666 [2024-04-17 14:35:42.035066] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.666 [2024-04-17 14:35:42.051891] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.666 [2024-04-17 14:35:42.051937] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.666 [2024-04-17 14:35:42.067767] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.067825] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.084844] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.084905] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.100859] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.100933] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.117273] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.117342] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.136002] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.136055] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.151028] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.151082] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.160802] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.160869] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.176388] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.176452] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.193801] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.193884] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.210029] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.210092] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.228445] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.228500] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.243632] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.243685] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.667 [2024-04-17 14:35:42.253729] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.667 [2024-04-17 14:35:42.253784] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.269619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.269709] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.286532] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.286602] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.303724] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.303823] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.317968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.318060] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.334330] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.334407] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.351122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.351200] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.362648] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.362719] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 00:16:33.925 Latency(us) 00:16:33.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.925 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:33.925 Nvme1n1 : 5.01 10657.95 83.27 0.00 0.00 11996.36 4706.68 27644.28 00:16:33.925 =================================================================================================================== 00:16:33.925 Total : 10657.95 83.27 0.00 0.00 11996.36 4706.68 27644.28 00:16:33.925 [2024-04-17 14:35:42.374673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.374753] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.386675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.386741] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.398654] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.398720] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.410715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.410790] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.422689] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.422742] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.434671] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.434724] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.446700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.446763] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.458660] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.458708] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.470687] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.470757] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.482687] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.482755] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.494690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.494741] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.506694] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.506732] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.925 [2024-04-17 14:35:42.518700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.925 [2024-04-17 14:35:42.518735] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.184 [2024-04-17 14:35:42.530770] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.184 [2024-04-17 14:35:42.530854] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.184 [2024-04-17 14:35:42.542724] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.184 [2024-04-17 14:35:42.542764] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.184 [2024-04-17 14:35:42.554693] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.184 [2024-04-17 14:35:42.554720] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.184 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67621) - No such process 00:16:34.184 14:35:42 -- target/zcopy.sh@49 -- # wait 67621 00:16:34.184 14:35:42 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.184 14:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.184 14:35:42 -- common/autotest_common.sh@10 -- # set +x 00:16:34.184 14:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.184 14:35:42 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:34.184 14:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.184 14:35:42 -- common/autotest_common.sh@10 -- # set +x 00:16:34.184 delay0 00:16:34.184 14:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.184 14:35:42 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:34.184 14:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.184 14:35:42 -- common/autotest_common.sh@10 -- # set +x 00:16:34.184 14:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.184 14:35:42 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:34.184 [2024-04-17 14:35:42.756467] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:40.743 Initializing NVMe Controllers 00:16:40.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:40.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:40.743 Initialization complete. Launching workers. 
00:16:40.743 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 87 00:16:40.743 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 374, failed to submit 33 00:16:40.743 success 248, unsuccess 126, failed 0 00:16:40.743 14:35:48 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:40.743 14:35:48 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:40.743 14:35:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:40.743 14:35:48 -- nvmf/common.sh@117 -- # sync 00:16:40.743 14:35:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:40.743 14:35:48 -- nvmf/common.sh@120 -- # set +e 00:16:40.743 14:35:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:40.743 14:35:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:40.743 rmmod nvme_tcp 00:16:40.743 rmmod nvme_fabrics 00:16:40.743 rmmod nvme_keyring 00:16:40.743 14:35:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:40.743 14:35:48 -- nvmf/common.sh@124 -- # set -e 00:16:40.743 14:35:48 -- nvmf/common.sh@125 -- # return 0 00:16:40.743 14:35:48 -- nvmf/common.sh@478 -- # '[' -n 67471 ']' 00:16:40.743 14:35:48 -- nvmf/common.sh@479 -- # killprocess 67471 00:16:40.743 14:35:48 -- common/autotest_common.sh@936 -- # '[' -z 67471 ']' 00:16:40.743 14:35:48 -- common/autotest_common.sh@940 -- # kill -0 67471 00:16:40.743 14:35:48 -- common/autotest_common.sh@941 -- # uname 00:16:40.743 14:35:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.743 14:35:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67471 00:16:40.743 14:35:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.743 14:35:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.743 killing process with pid 67471 00:16:40.743 14:35:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67471' 00:16:40.743 14:35:48 -- common/autotest_common.sh@955 -- # kill 67471 00:16:40.744 14:35:48 -- common/autotest_common.sh@960 -- # wait 67471 00:16:40.744 14:35:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:40.744 14:35:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:40.744 14:35:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:40.744 14:35:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.744 14:35:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:40.744 14:35:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.744 14:35:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.744 14:35:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.744 14:35:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:40.744 00:16:40.744 real 0m24.417s 00:16:40.744 user 0m39.832s 00:16:40.744 sys 0m6.729s 00:16:40.744 14:35:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:40.744 14:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:40.744 ************************************ 00:16:40.744 END TEST nvmf_zcopy 00:16:40.744 ************************************ 00:16:40.744 14:35:49 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:40.744 14:35:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:40.744 14:35:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.744 14:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:40.744 ************************************ 00:16:40.744 START TEST nvmf_nmic 
00:16:40.744 ************************************ 00:16:40.744 14:35:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:40.744 * Looking for test storage... 00:16:41.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:41.003 14:35:49 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:41.003 14:35:49 -- nvmf/common.sh@7 -- # uname -s 00:16:41.003 14:35:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.003 14:35:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.003 14:35:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.003 14:35:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.003 14:35:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.003 14:35:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.003 14:35:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.003 14:35:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.003 14:35:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.003 14:35:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.003 14:35:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:16:41.003 14:35:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:16:41.003 14:35:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.003 14:35:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.003 14:35:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.003 14:35:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.003 14:35:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.003 14:35:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.003 14:35:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.003 14:35:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.003 14:35:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.003 14:35:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.003 14:35:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.003 14:35:49 -- paths/export.sh@5 -- # export PATH 00:16:41.003 14:35:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.003 14:35:49 -- nvmf/common.sh@47 -- # : 0 00:16:41.003 14:35:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.003 14:35:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.003 14:35:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.003 14:35:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.003 14:35:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.003 14:35:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.003 14:35:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.003 14:35:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.003 14:35:49 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:41.003 14:35:49 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:41.003 14:35:49 -- target/nmic.sh@14 -- # nvmftestinit 00:16:41.003 14:35:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:41.003 14:35:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.003 14:35:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:41.003 14:35:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:41.003 14:35:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:41.003 14:35:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.003 14:35:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.003 14:35:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.003 14:35:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:41.003 14:35:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:41.003 14:35:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:41.003 14:35:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:41.003 14:35:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:41.003 14:35:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:41.003 14:35:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.003 14:35:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.003 14:35:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:41.003 14:35:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:41.003 14:35:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:41.003 14:35:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:41.003 14:35:49 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:41.003 14:35:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.003 14:35:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:41.003 14:35:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:41.003 14:35:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:41.003 14:35:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:41.003 14:35:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:41.003 14:35:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:41.003 Cannot find device "nvmf_tgt_br" 00:16:41.003 14:35:49 -- nvmf/common.sh@155 -- # true 00:16:41.003 14:35:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.003 Cannot find device "nvmf_tgt_br2" 00:16:41.003 14:35:49 -- nvmf/common.sh@156 -- # true 00:16:41.003 14:35:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:41.003 14:35:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:41.003 Cannot find device "nvmf_tgt_br" 00:16:41.003 14:35:49 -- nvmf/common.sh@158 -- # true 00:16:41.003 14:35:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:41.003 Cannot find device "nvmf_tgt_br2" 00:16:41.003 14:35:49 -- nvmf/common.sh@159 -- # true 00:16:41.003 14:35:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:41.003 14:35:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:41.003 14:35:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.003 14:35:49 -- nvmf/common.sh@162 -- # true 00:16:41.003 14:35:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.003 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.003 14:35:49 -- nvmf/common.sh@163 -- # true 00:16:41.003 14:35:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:41.003 14:35:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:41.003 14:35:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:41.003 14:35:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:41.003 14:35:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:41.003 14:35:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:41.003 14:35:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:41.003 14:35:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:41.003 14:35:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:41.003 14:35:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:41.003 14:35:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:41.003 14:35:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:41.003 14:35:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:41.003 14:35:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.003 14:35:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:41.004 14:35:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:41.262 14:35:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:41.262 14:35:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:41.262 14:35:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:41.262 14:35:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:41.262 14:35:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:41.262 14:35:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:41.262 14:35:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:41.262 14:35:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:41.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:41.262 00:16:41.262 --- 10.0.0.2 ping statistics --- 00:16:41.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.262 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:41.262 14:35:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:41.262 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:41.262 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:16:41.262 00:16:41.262 --- 10.0.0.3 ping statistics --- 00:16:41.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.262 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:41.262 14:35:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:41.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:41.262 00:16:41.262 --- 10.0.0.1 ping statistics --- 00:16:41.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.262 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:41.262 14:35:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.262 14:35:49 -- nvmf/common.sh@422 -- # return 0 00:16:41.262 14:35:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:41.262 14:35:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.262 14:35:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:41.262 14:35:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:41.262 14:35:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.262 14:35:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:41.262 14:35:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:41.262 14:35:49 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:41.262 14:35:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:41.262 14:35:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:41.262 14:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:41.262 14:35:49 -- nvmf/common.sh@470 -- # nvmfpid=67944 00:16:41.262 14:35:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:41.262 14:35:49 -- nvmf/common.sh@471 -- # waitforlisten 67944 00:16:41.262 14:35:49 -- common/autotest_common.sh@817 -- # '[' -z 67944 ']' 00:16:41.262 14:35:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.262 14:35:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:41.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:41.262 14:35:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.262 14:35:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:41.262 14:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:41.262 [2024-04-17 14:35:49.775973] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:16:41.262 [2024-04-17 14:35:49.776090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.520 [2024-04-17 14:35:49.912862] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.520 [2024-04-17 14:35:49.976644] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.520 [2024-04-17 14:35:49.976698] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.520 [2024-04-17 14:35:49.976710] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.520 [2024-04-17 14:35:49.976718] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.520 [2024-04-17 14:35:49.976725] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.521 [2024-04-17 14:35:49.977616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.521 [2024-04-17 14:35:49.977761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.521 [2024-04-17 14:35:49.977861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.521 [2024-04-17 14:35:49.977872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.521 14:35:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:41.521 14:35:50 -- common/autotest_common.sh@850 -- # return 0 00:16:41.521 14:35:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:41.521 14:35:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:41.521 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.521 14:35:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.521 14:35:50 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.521 14:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.521 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.521 [2024-04-17 14:35:50.100006] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.521 14:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.521 14:35:50 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:41.521 14:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.521 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.780 Malloc0 00:16:41.780 14:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.780 14:35:50 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:41.780 14:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.780 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.780 14:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.780 14:35:50 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:41.780 14:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.780 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.780 14:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.780 14:35:50 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.780 14:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.780 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.780 [2024-04-17 14:35:50.164631] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.780 14:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.780 test case1: single bdev can't be used in multiple subsystems 00:16:41.780 14:35:50 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:41.780 14:35:50 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:41.780 14:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.780 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.780 14:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.780 14:35:50 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:41.780 14:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.780 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.780 14:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.780 14:35:50 -- target/nmic.sh@28 -- # nmic_status=0 00:16:41.780 14:35:50 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:41.780 14:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.780 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.780 [2024-04-17 14:35:50.192507] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:41.780 [2024-04-17 14:35:50.192558] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:41.780 [2024-04-17 14:35:50.192573] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.780 request: 00:16:41.780 { 00:16:41.780 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:41.780 "namespace": { 00:16:41.780 "bdev_name": "Malloc0", 00:16:41.780 "no_auto_visible": false 00:16:41.780 }, 00:16:41.780 "method": "nvmf_subsystem_add_ns", 00:16:41.780 "req_id": 1 00:16:41.780 } 00:16:41.780 Got JSON-RPC error response 00:16:41.780 response: 00:16:41.780 { 00:16:41.780 "code": -32602, 00:16:41.780 "message": "Invalid parameters" 00:16:41.780 } 00:16:41.780 14:35:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:41.780 14:35:50 -- target/nmic.sh@29 -- # nmic_status=1 00:16:41.780 14:35:50 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:41.780 Adding namespace failed - expected result. 00:16:41.780 14:35:50 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:16:41.780 test case2: host connect to nvmf target in multiple paths 00:16:41.780 14:35:50 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:41.780 14:35:50 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:41.780 14:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.780 14:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:41.780 [2024-04-17 14:35:50.204647] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:41.780 14:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.780 14:35:50 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 --hostid=c475d660-18c3-4238-bb35-f293e0cc1403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.780 14:35:50 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 --hostid=c475d660-18c3-4238-bb35-f293e0cc1403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:42.039 14:35:50 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:42.039 14:35:50 -- common/autotest_common.sh@1184 -- # local i=0 00:16:42.039 14:35:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:42.039 14:35:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:42.039 14:35:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:44.005 14:35:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:44.005 14:35:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:44.005 14:35:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:44.005 14:35:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:44.005 14:35:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.005 14:35:52 -- common/autotest_common.sh@1194 -- # return 0 00:16:44.005 14:35:52 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:44.005 [global] 00:16:44.005 thread=1 00:16:44.005 invalidate=1 00:16:44.005 rw=write 00:16:44.005 time_based=1 00:16:44.005 runtime=1 00:16:44.005 ioengine=libaio 00:16:44.005 direct=1 00:16:44.005 bs=4096 00:16:44.005 iodepth=1 00:16:44.005 norandommap=0 00:16:44.005 numjobs=1 00:16:44.005 00:16:44.005 verify_dump=1 00:16:44.005 verify_backlog=512 00:16:44.005 verify_state_save=0 00:16:44.005 do_verify=1 00:16:44.005 verify=crc32c-intel 00:16:44.005 [job0] 00:16:44.005 filename=/dev/nvme0n1 00:16:44.005 Could not set queue depth (nvme0n1) 00:16:44.263 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.263 fio-3.35 00:16:44.263 Starting 1 thread 00:16:45.198 00:16:45.198 job0: (groupid=0, jobs=1): err= 0: pid=68028: Wed Apr 17 14:35:53 2024 00:16:45.198 read: IOPS=2979, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:16:45.198 slat (nsec): min=12495, max=49116, avg=15604.14, stdev=2942.63 00:16:45.198 clat (usec): min=140, max=383, avg=180.58, stdev=15.78 00:16:45.198 lat (usec): min=157, max=413, avg=196.18, stdev=16.18 00:16:45.198 clat percentiles (usec): 00:16:45.198 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:16:45.198 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:16:45.198 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 
95.00th=[ 206], 00:16:45.198 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 255], 99.95th=[ 363], 00:16:45.198 | 99.99th=[ 383] 00:16:45.198 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:45.198 slat (usec): min=17, max=136, avg=22.61, stdev= 5.43 00:16:45.198 clat (usec): min=85, max=318, avg=108.76, stdev=12.48 00:16:45.198 lat (usec): min=106, max=392, avg=131.37, stdev=14.60 00:16:45.198 clat percentiles (usec): 00:16:45.198 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 100], 00:16:45.198 | 30.00th=[ 103], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:16:45.198 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 130], 00:16:45.198 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 202], 99.95th=[ 255], 00:16:45.198 | 99.99th=[ 318] 00:16:45.198 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:16:45.198 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:45.198 lat (usec) : 100=10.67%, 250=89.25%, 500=0.08% 00:16:45.198 cpu : usr=2.20%, sys=9.20%, ctx=6062, majf=0, minf=2 00:16:45.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.198 issued rwts: total=2982,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.198 00:16:45.198 Run status group 0 (all jobs): 00:16:45.198 READ: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=11.6MiB (12.2MB), run=1001-1001msec 00:16:45.198 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:16:45.198 00:16:45.198 Disk stats (read/write): 00:16:45.198 nvme0n1: ios=2610/2951, merge=0/0, ticks=504/346, in_queue=850, util=91.38% 00:16:45.198 14:35:53 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:45.457 14:35:53 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:45.457 14:35:53 -- common/autotest_common.sh@1205 -- # local i=0 00:16:45.457 14:35:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:45.457 14:35:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.457 14:35:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:45.457 14:35:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.457 14:35:53 -- common/autotest_common.sh@1217 -- # return 0 00:16:45.457 14:35:53 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:45.457 14:35:53 -- target/nmic.sh@53 -- # nvmftestfini 00:16:45.457 14:35:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:45.457 14:35:53 -- nvmf/common.sh@117 -- # sync 00:16:45.457 14:35:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.457 14:35:53 -- nvmf/common.sh@120 -- # set +e 00:16:45.457 14:35:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.457 14:35:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.457 rmmod nvme_tcp 00:16:45.457 rmmod nvme_fabrics 00:16:45.457 rmmod nvme_keyring 00:16:45.457 14:35:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.457 14:35:54 -- nvmf/common.sh@124 -- # set -e 00:16:45.457 14:35:54 -- nvmf/common.sh@125 -- # return 0 00:16:45.457 14:35:54 -- nvmf/common.sh@478 -- # '[' -n 
67944 ']' 00:16:45.457 14:35:54 -- nvmf/common.sh@479 -- # killprocess 67944 00:16:45.457 14:35:54 -- common/autotest_common.sh@936 -- # '[' -z 67944 ']' 00:16:45.457 14:35:54 -- common/autotest_common.sh@940 -- # kill -0 67944 00:16:45.457 14:35:54 -- common/autotest_common.sh@941 -- # uname 00:16:45.457 14:35:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.457 14:35:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67944 00:16:45.716 14:35:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:45.716 killing process with pid 67944 00:16:45.716 14:35:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:45.716 14:35:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67944' 00:16:45.716 14:35:54 -- common/autotest_common.sh@955 -- # kill 67944 00:16:45.716 14:35:54 -- common/autotest_common.sh@960 -- # wait 67944 00:16:45.716 14:35:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:45.716 14:35:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:45.716 14:35:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:45.716 14:35:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.716 14:35:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.716 14:35:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.716 14:35:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.716 14:35:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.716 14:35:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:45.716 00:16:45.716 real 0m5.039s 00:16:45.716 user 0m15.729s 00:16:45.716 sys 0m2.083s 00:16:45.716 14:35:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:45.716 14:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.716 ************************************ 00:16:45.716 END TEST nvmf_nmic 00:16:45.716 ************************************ 00:16:45.975 14:35:54 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:45.975 14:35:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:45.975 14:35:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:45.975 14:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 ************************************ 00:16:45.975 START TEST nvmf_fio_target 00:16:45.975 ************************************ 00:16:45.975 14:35:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:45.975 * Looking for test storage... 
00:16:45.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:45.975 14:35:54 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.975 14:35:54 -- nvmf/common.sh@7 -- # uname -s 00:16:45.975 14:35:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.975 14:35:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.975 14:35:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.975 14:35:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.975 14:35:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.975 14:35:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.975 14:35:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.975 14:35:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.975 14:35:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.975 14:35:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.975 14:35:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:16:45.975 14:35:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:16:45.975 14:35:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.976 14:35:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.976 14:35:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.976 14:35:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.976 14:35:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.976 14:35:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.976 14:35:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.976 14:35:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.976 14:35:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.976 14:35:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.976 14:35:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.976 14:35:54 -- paths/export.sh@5 -- # export PATH 00:16:45.976 14:35:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.976 14:35:54 -- nvmf/common.sh@47 -- # : 0 00:16:45.976 14:35:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.976 14:35:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.976 14:35:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.976 14:35:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.976 14:35:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.976 14:35:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.976 14:35:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.976 14:35:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.976 14:35:54 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.976 14:35:54 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.976 14:35:54 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.976 14:35:54 -- target/fio.sh@16 -- # nvmftestinit 00:16:45.976 14:35:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:45.976 14:35:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.976 14:35:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:45.976 14:35:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:45.976 14:35:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:45.976 14:35:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.976 14:35:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.976 14:35:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.976 14:35:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:45.976 14:35:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:45.976 14:35:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:45.976 14:35:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:45.976 14:35:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:45.976 14:35:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:45.976 14:35:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.976 14:35:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.976 14:35:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:45.976 14:35:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:45.976 14:35:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:45.976 14:35:54 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.976 14:35:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.976 14:35:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.976 14:35:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.976 14:35:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.976 14:35:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.976 14:35:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.976 14:35:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:45.976 14:35:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:46.235 Cannot find device "nvmf_tgt_br" 00:16:46.235 14:35:54 -- nvmf/common.sh@155 -- # true 00:16:46.235 14:35:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.235 Cannot find device "nvmf_tgt_br2" 00:16:46.235 14:35:54 -- nvmf/common.sh@156 -- # true 00:16:46.235 14:35:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:46.235 14:35:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:46.235 Cannot find device "nvmf_tgt_br" 00:16:46.235 14:35:54 -- nvmf/common.sh@158 -- # true 00:16:46.235 14:35:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:46.235 Cannot find device "nvmf_tgt_br2" 00:16:46.235 14:35:54 -- nvmf/common.sh@159 -- # true 00:16:46.235 14:35:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:46.235 14:35:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:46.235 14:35:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.235 14:35:54 -- nvmf/common.sh@162 -- # true 00:16:46.235 14:35:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.235 14:35:54 -- nvmf/common.sh@163 -- # true 00:16:46.235 14:35:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.235 14:35:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:46.235 14:35:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:46.235 14:35:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:46.235 14:35:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:46.235 14:35:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:46.235 14:35:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:46.235 14:35:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:46.235 14:35:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:46.235 14:35:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:46.235 14:35:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:46.235 14:35:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:46.235 14:35:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:46.235 14:35:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:46.494 14:35:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:46.494 14:35:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:46.494 14:35:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:46.494 14:35:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:46.494 14:35:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.494 14:35:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.494 14:35:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:46.494 14:35:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:46.494 14:35:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:46.494 14:35:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:46.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:16:46.494 00:16:46.494 --- 10.0.0.2 ping statistics --- 00:16:46.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.494 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:46.494 14:35:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:46.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:46.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:46.494 00:16:46.494 --- 10.0.0.3 ping statistics --- 00:16:46.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.494 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:46.494 14:35:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:46.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:46.494 00:16:46.494 --- 10.0.0.1 ping statistics --- 00:16:46.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.494 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:46.494 14:35:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.494 14:35:54 -- nvmf/common.sh@422 -- # return 0 00:16:46.494 14:35:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:46.494 14:35:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.494 14:35:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:46.494 14:35:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:46.494 14:35:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.494 14:35:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:46.494 14:35:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:46.494 14:35:54 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:46.494 14:35:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:46.494 14:35:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:46.494 14:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:46.494 14:35:54 -- nvmf/common.sh@470 -- # nvmfpid=68212 00:16:46.494 14:35:54 -- nvmf/common.sh@471 -- # waitforlisten 68212 00:16:46.494 14:35:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.494 14:35:54 -- common/autotest_common.sh@817 -- # '[' -z 68212 ']' 00:16:46.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:46.494 14:35:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.494 14:35:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:46.494 14:35:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.494 14:35:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:46.494 14:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:46.494 [2024-04-17 14:35:55.008001] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:16:46.494 [2024-04-17 14:35:55.008098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.780 [2024-04-17 14:35:55.145942] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.780 [2024-04-17 14:35:55.224055] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.780 [2024-04-17 14:35:55.224128] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.781 [2024-04-17 14:35:55.224145] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.781 [2024-04-17 14:35:55.224158] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.781 [2024-04-17 14:35:55.224171] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.781 [2024-04-17 14:35:55.224987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.781 [2024-04-17 14:35:55.225153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.781 [2024-04-17 14:35:55.225089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.781 [2024-04-17 14:35:55.225145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.781 14:35:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:46.781 14:35:55 -- common/autotest_common.sh@850 -- # return 0 00:16:46.781 14:35:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:46.781 14:35:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:46.781 14:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:46.781 14:35:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.781 14:35:55 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:47.365 [2024-04-17 14:35:55.670487] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.365 14:35:55 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.623 14:35:56 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:47.623 14:35:56 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.881 14:35:56 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:47.881 14:35:56 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.139 14:35:56 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:48.139 14:35:56 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.705 14:35:57 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 
00:16:48.705 14:35:57 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:48.962 14:35:57 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.220 14:35:57 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:49.220 14:35:57 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.477 14:35:58 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:49.477 14:35:58 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.735 14:35:58 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:49.735 14:35:58 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:49.994 14:35:58 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:50.253 14:35:58 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:50.253 14:35:58 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:50.511 14:35:59 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:50.511 14:35:59 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:50.770 14:35:59 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.033 [2024-04-17 14:35:59.566501] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.033 14:35:59 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:51.305 14:35:59 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:51.871 14:36:00 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 --hostid=c475d660-18c3-4238-bb35-f293e0cc1403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.871 14:36:00 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:51.871 14:36:00 -- common/autotest_common.sh@1184 -- # local i=0 00:16:51.871 14:36:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.871 14:36:00 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:51.871 14:36:00 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:51.871 14:36:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:53.772 14:36:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:53.772 14:36:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:53.772 14:36:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.772 14:36:02 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:53.772 14:36:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.772 14:36:02 -- common/autotest_common.sh@1194 -- # return 0 00:16:53.772 14:36:02 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:53.772 [global] 00:16:53.772 thread=1 00:16:53.772 invalidate=1 
00:16:53.772 rw=write 00:16:53.772 time_based=1 00:16:53.772 runtime=1 00:16:53.772 ioengine=libaio 00:16:53.772 direct=1 00:16:53.772 bs=4096 00:16:53.772 iodepth=1 00:16:53.772 norandommap=0 00:16:53.772 numjobs=1 00:16:53.772 00:16:53.772 verify_dump=1 00:16:53.772 verify_backlog=512 00:16:53.772 verify_state_save=0 00:16:53.772 do_verify=1 00:16:53.772 verify=crc32c-intel 00:16:53.772 [job0] 00:16:53.772 filename=/dev/nvme0n1 00:16:53.772 [job1] 00:16:53.772 filename=/dev/nvme0n2 00:16:53.772 [job2] 00:16:53.772 filename=/dev/nvme0n3 00:16:53.772 [job3] 00:16:53.772 filename=/dev/nvme0n4 00:16:54.031 Could not set queue depth (nvme0n1) 00:16:54.031 Could not set queue depth (nvme0n2) 00:16:54.031 Could not set queue depth (nvme0n3) 00:16:54.031 Could not set queue depth (nvme0n4) 00:16:54.031 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.031 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.031 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.031 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.031 fio-3.35 00:16:54.031 Starting 4 threads 00:16:55.406 00:16:55.406 job0: (groupid=0, jobs=1): err= 0: pid=68401: Wed Apr 17 14:36:03 2024 00:16:55.406 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:55.406 slat (nsec): min=11511, max=49378, avg=16294.85, stdev=4565.89 00:16:55.406 clat (usec): min=134, max=584, avg=193.11, stdev=60.25 00:16:55.406 lat (usec): min=147, max=600, avg=209.40, stdev=61.79 00:16:55.406 clat percentiles (usec): 00:16:55.406 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:16:55.406 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 188], 00:16:55.406 | 70.00th=[ 196], 80.00th=[ 208], 90.00th=[ 235], 95.00th=[ 269], 00:16:55.406 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 570], 00:16:55.406 | 99.99th=[ 586] 00:16:55.406 write: IOPS=3001, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:16:55.406 slat (usec): min=14, max=109, avg=24.85, stdev= 6.72 00:16:55.406 clat (usec): min=92, max=409, avg=125.69, stdev=18.75 00:16:55.406 lat (usec): min=111, max=430, avg=150.55, stdev=20.07 00:16:55.406 clat percentiles (usec): 00:16:55.406 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 112], 00:16:55.406 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 127], 00:16:55.406 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 149], 95.00th=[ 161], 00:16:55.406 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 223], 99.95th=[ 251], 00:16:55.406 | 99.99th=[ 412] 00:16:55.406 bw ( KiB/s): min=12263, max=12263, per=40.86%, avg=12263.00, stdev= 0.00, samples=1 00:16:55.406 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:16:55.406 lat (usec) : 100=1.78%, 250=95.04%, 500=2.39%, 750=0.79% 00:16:55.406 cpu : usr=2.00%, sys=9.70%, ctx=5565, majf=0, minf=11 00:16:55.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.406 issued rwts: total=2560,3005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.406 job1: (groupid=0, jobs=1): err= 0: pid=68402: Wed Apr 17 14:36:03 2024 
00:16:55.406 read: IOPS=1262, BW=5051KiB/s (5172kB/s)(5056KiB/1001msec) 00:16:55.406 slat (nsec): min=9549, max=64545, avg=21080.04, stdev=7383.49 00:16:55.406 clat (usec): min=247, max=1178, avg=412.76, stdev=114.71 00:16:55.406 lat (usec): min=264, max=1207, avg=433.84, stdev=115.57 00:16:55.406 clat percentiles (usec): 00:16:55.406 | 1.00th=[ 277], 5.00th=[ 310], 10.00th=[ 322], 20.00th=[ 334], 00:16:55.406 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 375], 60.00th=[ 396], 00:16:55.406 | 70.00th=[ 429], 80.00th=[ 474], 90.00th=[ 553], 95.00th=[ 644], 00:16:55.406 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 1106], 99.95th=[ 1172], 00:16:55.406 | 99.99th=[ 1172] 00:16:55.406 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:55.406 slat (nsec): min=14873, max=82273, avg=27305.28, stdev=8595.68 00:16:55.406 clat (usec): min=114, max=1174, avg=262.53, stdev=57.17 00:16:55.406 lat (usec): min=187, max=1202, avg=289.84, stdev=57.82 00:16:55.406 clat percentiles (usec): 00:16:55.406 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 219], 00:16:55.406 | 30.00th=[ 231], 40.00th=[ 243], 50.00th=[ 255], 60.00th=[ 265], 00:16:55.406 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 359], 00:16:55.406 | 99.00th=[ 416], 99.50th=[ 482], 99.90th=[ 701], 99.95th=[ 1172], 00:16:55.406 | 99.99th=[ 1172] 00:16:55.406 bw ( KiB/s): min= 7904, max= 7904, per=26.34%, avg=7904.00, stdev= 0.00, samples=1 00:16:55.406 iops : min= 1976, max= 1976, avg=1976.00, stdev= 0.00, samples=1 00:16:55.406 lat (usec) : 250=25.07%, 500=68.00%, 750=5.71%, 1000=0.96% 00:16:55.406 lat (msec) : 2=0.25% 00:16:55.406 cpu : usr=2.10%, sys=5.30%, ctx=2800, majf=0, minf=8 00:16:55.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.406 issued rwts: total=1264,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.406 job2: (groupid=0, jobs=1): err= 0: pid=68403: Wed Apr 17 14:36:03 2024 00:16:55.406 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:55.406 slat (nsec): min=15469, max=89571, avg=29952.40, stdev=10318.38 00:16:55.406 clat (usec): min=200, max=2744, avg=455.75, stdev=145.74 00:16:55.406 lat (usec): min=224, max=2779, avg=485.70, stdev=147.65 00:16:55.406 clat percentiles (usec): 00:16:55.406 | 1.00th=[ 277], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 351], 00:16:55.406 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 408], 60.00th=[ 498], 00:16:55.406 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 594], 95.00th=[ 701], 00:16:55.406 | 99.00th=[ 840], 99.50th=[ 955], 99.90th=[ 1582], 99.95th=[ 2737], 00:16:55.406 | 99.99th=[ 2737] 00:16:55.406 write: IOPS=1431, BW=5726KiB/s (5864kB/s)(5732KiB/1001msec); 0 zone resets 00:16:55.406 slat (usec): min=21, max=136, avg=41.28, stdev=14.83 00:16:55.406 clat (usec): min=107, max=866, avg=304.11, stdev=138.45 00:16:55.406 lat (usec): min=139, max=899, avg=345.39, stdev=146.80 00:16:55.406 clat percentiles (usec): 00:16:55.406 | 1.00th=[ 116], 5.00th=[ 126], 10.00th=[ 133], 20.00th=[ 151], 00:16:55.406 | 30.00th=[ 221], 40.00th=[ 265], 50.00th=[ 302], 60.00th=[ 318], 00:16:55.406 | 70.00th=[ 343], 80.00th=[ 429], 90.00th=[ 502], 95.00th=[ 570], 00:16:55.406 | 99.00th=[ 644], 99.50th=[ 701], 99.90th=[ 848], 99.95th=[ 865], 00:16:55.406 | 99.99th=[ 865] 00:16:55.406 bw 
( KiB/s): min= 4630, max= 4630, per=15.43%, avg=4630.00, stdev= 0.00, samples=1 00:16:55.406 iops : min= 1157, max= 1157, avg=1157.00, stdev= 0.00, samples=1 00:16:55.406 lat (usec) : 250=22.55%, 500=55.31%, 750=21.20%, 1000=0.73% 00:16:55.406 lat (msec) : 2=0.16%, 4=0.04% 00:16:55.406 cpu : usr=2.30%, sys=6.60%, ctx=2460, majf=0, minf=17 00:16:55.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.407 issued rwts: total=1024,1433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.407 job3: (groupid=0, jobs=1): err= 0: pid=68404: Wed Apr 17 14:36:03 2024 00:16:55.407 read: IOPS=1262, BW=5051KiB/s (5172kB/s)(5056KiB/1001msec) 00:16:55.407 slat (nsec): min=9195, max=69266, avg=21872.14, stdev=7023.82 00:16:55.407 clat (usec): min=250, max=1208, avg=411.66, stdev=114.80 00:16:55.407 lat (usec): min=261, max=1226, avg=433.54, stdev=115.33 00:16:55.407 clat percentiles (usec): 00:16:55.407 | 1.00th=[ 277], 5.00th=[ 310], 10.00th=[ 322], 20.00th=[ 334], 00:16:55.407 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 375], 60.00th=[ 392], 00:16:55.407 | 70.00th=[ 424], 80.00th=[ 469], 90.00th=[ 545], 95.00th=[ 644], 00:16:55.407 | 99.00th=[ 840], 99.50th=[ 914], 99.90th=[ 1123], 99.95th=[ 1205], 00:16:55.407 | 99.99th=[ 1205] 00:16:55.407 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:55.407 slat (usec): min=19, max=110, avg=33.05, stdev= 7.95 00:16:55.407 clat (usec): min=154, max=1157, avg=256.58, stdev=56.42 00:16:55.407 lat (usec): min=191, max=1196, avg=289.63, stdev=57.71 00:16:55.407 clat percentiles (usec): 00:16:55.407 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 215], 00:16:55.407 | 30.00th=[ 225], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 260], 00:16:55.407 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 322], 95.00th=[ 355], 00:16:55.407 | 99.00th=[ 420], 99.50th=[ 478], 99.90th=[ 742], 99.95th=[ 1156], 00:16:55.407 | 99.99th=[ 1156] 00:16:55.407 bw ( KiB/s): min= 7896, max= 7896, per=26.31%, avg=7896.00, stdev= 0.00, samples=1 00:16:55.407 iops : min= 1974, max= 1974, avg=1974.00, stdev= 0.00, samples=1 00:16:55.407 lat (usec) : 250=28.00%, 500=65.25%, 750=5.50%, 1000=1.00% 00:16:55.407 lat (msec) : 2=0.25% 00:16:55.407 cpu : usr=1.80%, sys=6.70%, ctx=2802, majf=0, minf=1 00:16:55.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.407 issued rwts: total=1264,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.407 00:16:55.407 Run status group 0 (all jobs): 00:16:55.407 READ: bw=23.9MiB/s (25.0MB/s), 4092KiB/s-9.99MiB/s (4190kB/s-10.5MB/s), io=23.9MiB (25.0MB), run=1001-1001msec 00:16:55.407 WRITE: bw=29.3MiB/s (30.7MB/s), 5726KiB/s-11.7MiB/s (5864kB/s-12.3MB/s), io=29.3MiB (30.8MB), run=1001-1001msec 00:16:55.407 00:16:55.407 Disk stats (read/write): 00:16:55.407 nvme0n1: ios=2208/2560, merge=0/0, ticks=460/350, in_queue=810, util=88.58% 00:16:55.407 nvme0n2: ios=1073/1440, merge=0/0, ticks=419/328, in_queue=747, util=88.97% 00:16:55.407 nvme0n3: ios=975/1024, merge=0/0, ticks=443/368, in_queue=811, util=89.19% 
00:16:55.407 nvme0n4: ios=1024/1442, merge=0/0, ticks=396/370, in_queue=766, util=89.75% 00:16:55.407 14:36:03 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:55.407 [global] 00:16:55.407 thread=1 00:16:55.407 invalidate=1 00:16:55.407 rw=randwrite 00:16:55.407 time_based=1 00:16:55.407 runtime=1 00:16:55.407 ioengine=libaio 00:16:55.407 direct=1 00:16:55.407 bs=4096 00:16:55.407 iodepth=1 00:16:55.407 norandommap=0 00:16:55.407 numjobs=1 00:16:55.407 00:16:55.407 verify_dump=1 00:16:55.407 verify_backlog=512 00:16:55.407 verify_state_save=0 00:16:55.407 do_verify=1 00:16:55.407 verify=crc32c-intel 00:16:55.407 [job0] 00:16:55.407 filename=/dev/nvme0n1 00:16:55.407 [job1] 00:16:55.407 filename=/dev/nvme0n2 00:16:55.407 [job2] 00:16:55.407 filename=/dev/nvme0n3 00:16:55.407 [job3] 00:16:55.407 filename=/dev/nvme0n4 00:16:55.407 Could not set queue depth (nvme0n1) 00:16:55.407 Could not set queue depth (nvme0n2) 00:16:55.407 Could not set queue depth (nvme0n3) 00:16:55.407 Could not set queue depth (nvme0n4) 00:16:55.407 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.407 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.407 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.407 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.407 fio-3.35 00:16:55.407 Starting 4 threads 00:16:56.782 00:16:56.782 job0: (groupid=0, jobs=1): err= 0: pid=68457: Wed Apr 17 14:36:05 2024 00:16:56.782 read: IOPS=2670, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:16:56.782 slat (usec): min=11, max=103, avg=17.24, stdev= 5.85 00:16:56.782 clat (usec): min=110, max=2404, avg=172.32, stdev=46.01 00:16:56.782 lat (usec): min=153, max=2428, avg=189.56, stdev=46.57 00:16:56.782 clat percentiles (usec): 00:16:56.782 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:16:56.782 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:16:56.782 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 198], 00:16:56.782 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 289], 99.95th=[ 506], 00:16:56.782 | 99.99th=[ 2409] 00:16:56.782 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:56.782 slat (usec): min=15, max=117, avg=26.27, stdev= 8.29 00:16:56.782 clat (usec): min=59, max=379, avg=129.99, stdev=15.64 00:16:56.782 lat (usec): min=118, max=402, avg=156.26, stdev=18.58 00:16:56.782 clat percentiles (usec): 00:16:56.782 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:16:56.782 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 131], 00:16:56.782 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 155], 00:16:56.782 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 233], 99.95th=[ 355], 00:16:56.782 | 99.99th=[ 379] 00:16:56.782 bw ( KiB/s): min=12288, max=12288, per=27.46%, avg=12288.00, stdev= 0.00, samples=1 00:16:56.782 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:56.782 lat (usec) : 100=0.12%, 250=99.70%, 500=0.14%, 750=0.02% 00:16:56.782 lat (msec) : 4=0.02% 00:16:56.782 cpu : usr=2.80%, sys=10.20%, ctx=5749, majf=0, minf=13 00:16:56.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.782 issued rwts: total=2673,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.782 job1: (groupid=0, jobs=1): err= 0: pid=68458: Wed Apr 17 14:36:05 2024 00:16:56.782 read: IOPS=2245, BW=8983KiB/s (9199kB/s)(8992KiB/1001msec) 00:16:56.782 slat (nsec): min=9191, max=42602, avg=14233.57, stdev=3728.81 00:16:56.782 clat (usec): min=139, max=7345, avg=220.02, stdev=179.67 00:16:56.782 lat (usec): min=153, max=7358, avg=234.25, stdev=179.60 00:16:56.782 clat percentiles (usec): 00:16:56.782 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:16:56.782 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 206], 60.00th=[ 221], 00:16:56.782 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:16:56.782 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 1975], 99.95th=[ 3621], 00:16:56.782 | 99.99th=[ 7373] 00:16:56.782 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:56.782 slat (nsec): min=11298, max=88850, avg=19741.07, stdev=5433.70 00:16:56.782 clat (usec): min=96, max=1602, avg=161.98, stdev=55.58 00:16:56.782 lat (usec): min=115, max=1624, avg=181.72, stdev=55.15 00:16:56.782 clat percentiles (usec): 00:16:56.782 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 128], 00:16:56.782 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 159], 00:16:56.782 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 215], 95.00th=[ 223], 00:16:56.782 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 1106], 99.95th=[ 1352], 00:16:56.782 | 99.99th=[ 1598] 00:16:56.782 bw ( KiB/s): min=12288, max=12288, per=27.46%, avg=12288.00, stdev= 0.00, samples=1 00:16:56.782 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:56.782 lat (usec) : 100=0.06%, 250=85.46%, 500=14.18%, 750=0.15%, 1000=0.02% 00:16:56.782 lat (msec) : 2=0.08%, 4=0.02%, 10=0.02% 00:16:56.782 cpu : usr=1.80%, sys=6.60%, ctx=4811, majf=0, minf=7 00:16:56.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.782 issued rwts: total=2248,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.782 job2: (groupid=0, jobs=1): err= 0: pid=68459: Wed Apr 17 14:36:05 2024 00:16:56.782 read: IOPS=2287, BW=9151KiB/s (9370kB/s)(9160KiB/1001msec) 00:16:56.782 slat (nsec): min=9874, max=52728, avg=15474.64, stdev=3994.64 00:16:56.782 clat (usec): min=153, max=1715, avg=214.67, stdev=59.56 00:16:56.782 lat (usec): min=166, max=1729, avg=230.15, stdev=60.18 00:16:56.782 clat percentiles (usec): 00:16:56.782 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:16:56.782 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 206], 00:16:56.782 | 70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 293], 00:16:56.782 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 424], 99.95th=[ 979], 00:16:56.782 | 99.99th=[ 1713] 00:16:56.782 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:56.782 slat (nsec): min=14473, max=91993, avg=21396.64, stdev=5145.93 00:16:56.782 clat (usec): min=107, max=696, avg=159.98, stdev=32.58 00:16:56.782 lat (usec): min=125, max=715, avg=181.37, stdev=33.22 00:16:56.782 clat 
percentiles (usec): 00:16:56.782 | 1.00th=[ 117], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 133], 00:16:56.782 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 159], 00:16:56.782 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 215], 00:16:56.782 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 265], 99.95th=[ 273], 00:16:56.782 | 99.99th=[ 693] 00:16:56.782 bw ( KiB/s): min=12288, max=12288, per=27.46%, avg=12288.00, stdev= 0.00, samples=1 00:16:56.782 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:56.782 lat (usec) : 250=85.98%, 500=13.96%, 750=0.02%, 1000=0.02% 00:16:56.782 lat (msec) : 2=0.02% 00:16:56.782 cpu : usr=2.30%, sys=6.90%, ctx=4851, majf=0, minf=10 00:16:56.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.782 issued rwts: total=2290,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.782 job3: (groupid=0, jobs=1): err= 0: pid=68460: Wed Apr 17 14:36:05 2024 00:16:56.782 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:56.782 slat (nsec): min=11678, max=45393, avg=14378.98, stdev=2848.25 00:16:56.782 clat (usec): min=148, max=783, avg=181.25, stdev=26.22 00:16:56.782 lat (usec): min=161, max=797, avg=195.63, stdev=26.65 00:16:56.782 clat percentiles (usec): 00:16:56.782 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:16:56.782 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:16:56.782 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 217], 00:16:56.782 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 578], 99.95th=[ 586], 00:16:56.782 | 99.99th=[ 783] 00:16:56.782 write: IOPS=3004, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1001msec); 0 zone resets 00:16:56.782 slat (nsec): min=14217, max=98633, avg=22440.63, stdev=5940.06 00:16:56.782 clat (usec): min=105, max=813, avg=140.31, stdev=29.16 00:16:56.782 lat (usec): min=124, max=833, avg=162.75, stdev=30.45 00:16:56.782 clat percentiles (usec): 00:16:56.782 | 1.00th=[ 112], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 128], 00:16:56.782 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:16:56.782 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 165], 00:16:56.782 | 99.00th=[ 198], 99.50th=[ 297], 99.90th=[ 523], 99.95th=[ 660], 00:16:56.782 | 99.99th=[ 816] 00:16:56.782 bw ( KiB/s): min=12288, max=12288, per=27.46%, avg=12288.00, stdev= 0.00, samples=1 00:16:56.782 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:56.782 lat (usec) : 250=98.99%, 500=0.86%, 750=0.11%, 1000=0.04% 00:16:56.782 cpu : usr=2.30%, sys=8.20%, ctx=5568, majf=0, minf=15 00:16:56.782 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.782 issued rwts: total=2560,3008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.782 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.782 00:16:56.782 Run status group 0 (all jobs): 00:16:56.782 READ: bw=38.1MiB/s (40.0MB/s), 8983KiB/s-10.4MiB/s (9199kB/s-10.9MB/s), io=38.2MiB (40.0MB), run=1001-1001msec 00:16:56.782 WRITE: bw=43.7MiB/s (45.8MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=43.8MiB (45.9MB), 
run=1001-1001msec 00:16:56.782 00:16:56.782 Disk stats (read/write): 00:16:56.782 nvme0n1: ios=2392/2560, merge=0/0, ticks=433/357, in_queue=790, util=87.88% 00:16:56.782 nvme0n2: ios=2097/2181, merge=0/0, ticks=452/341, in_queue=793, util=87.87% 00:16:56.782 nvme0n3: ios=2048/2262, merge=0/0, ticks=429/369, in_queue=798, util=89.22% 00:16:56.782 nvme0n4: ios=2208/2560, merge=0/0, ticks=412/378, in_queue=790, util=89.78% 00:16:56.782 14:36:05 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:56.782 [global] 00:16:56.782 thread=1 00:16:56.782 invalidate=1 00:16:56.782 rw=write 00:16:56.782 time_based=1 00:16:56.782 runtime=1 00:16:56.782 ioengine=libaio 00:16:56.782 direct=1 00:16:56.782 bs=4096 00:16:56.782 iodepth=128 00:16:56.782 norandommap=0 00:16:56.782 numjobs=1 00:16:56.782 00:16:56.782 verify_dump=1 00:16:56.782 verify_backlog=512 00:16:56.782 verify_state_save=0 00:16:56.782 do_verify=1 00:16:56.782 verify=crc32c-intel 00:16:56.782 [job0] 00:16:56.782 filename=/dev/nvme0n1 00:16:56.782 [job1] 00:16:56.782 filename=/dev/nvme0n2 00:16:56.782 [job2] 00:16:56.782 filename=/dev/nvme0n3 00:16:56.782 [job3] 00:16:56.782 filename=/dev/nvme0n4 00:16:56.782 Could not set queue depth (nvme0n1) 00:16:56.782 Could not set queue depth (nvme0n2) 00:16:56.782 Could not set queue depth (nvme0n3) 00:16:56.782 Could not set queue depth (nvme0n4) 00:16:56.782 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.782 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.782 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.782 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.782 fio-3.35 00:16:56.782 Starting 4 threads 00:16:58.156 00:16:58.156 job0: (groupid=0, jobs=1): err= 0: pid=68520: Wed Apr 17 14:36:06 2024 00:16:58.156 read: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec) 00:16:58.156 slat (usec): min=8, max=10762, avg=291.21, stdev=1128.20 00:16:58.156 clat (usec): min=20397, max=61728, avg=37417.16, stdev=9522.59 00:16:58.156 lat (usec): min=20423, max=62849, avg=37708.37, stdev=9582.93 00:16:58.156 clat percentiles (usec): 00:16:58.156 | 1.00th=[22152], 5.00th=[23725], 10.00th=[25297], 20.00th=[26346], 00:16:58.156 | 30.00th=[28967], 40.00th=[36963], 50.00th=[39060], 60.00th=[41157], 00:16:58.156 | 70.00th=[42730], 80.00th=[45351], 90.00th=[49546], 95.00th=[53740], 00:16:58.156 | 99.00th=[56886], 99.50th=[59507], 99.90th=[60556], 99.95th=[61604], 00:16:58.156 | 99.99th=[61604] 00:16:58.156 write: IOPS=1949, BW=7797KiB/s (7984kB/s)(7844KiB/1006msec); 0 zone resets 00:16:58.156 slat (usec): min=11, max=9802, avg=274.69, stdev=972.87 00:16:58.156 clat (usec): min=3629, max=91163, avg=35014.10, stdev=16253.54 00:16:58.156 lat (usec): min=8130, max=91199, avg=35288.79, stdev=16349.46 00:16:58.156 clat percentiles (usec): 00:16:58.156 | 1.00th=[12125], 5.00th=[18482], 10.00th=[21627], 20.00th=[24249], 00:16:58.156 | 30.00th=[25822], 40.00th=[27395], 50.00th=[30802], 60.00th=[32113], 00:16:58.156 | 70.00th=[36439], 80.00th=[39584], 90.00th=[65799], 95.00th=[73925], 00:16:58.156 | 99.00th=[84411], 99.50th=[85459], 99.90th=[87557], 99.95th=[90702], 00:16:58.156 | 99.99th=[90702] 00:16:58.156 bw ( KiB/s): min= 6472, max= 8192, per=13.83%, avg=7332.00, stdev=1216.22, samples=2 
00:16:58.156 iops : min= 1618, max= 2048, avg=1833.00, stdev=304.06, samples=2 00:16:58.156 lat (msec) : 4=0.03%, 10=0.23%, 20=3.75%, 50=84.19%, 100=11.81% 00:16:58.156 cpu : usr=1.79%, sys=6.07%, ctx=538, majf=0, minf=11 00:16:58.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:58.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.156 issued rwts: total=1536,1961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.156 job1: (groupid=0, jobs=1): err= 0: pid=68521: Wed Apr 17 14:36:06 2024 00:16:58.156 read: IOPS=3671, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1008msec) 00:16:58.156 slat (usec): min=4, max=14930, avg=134.04, stdev=658.36 00:16:58.156 clat (usec): min=3238, max=28627, avg=17825.18, stdev=5065.47 00:16:58.156 lat (usec): min=7768, max=28643, avg=17959.22, stdev=5086.42 00:16:58.156 clat percentiles (usec): 00:16:58.156 | 1.00th=[ 8160], 5.00th=[11863], 10.00th=[12518], 20.00th=[12780], 00:16:58.156 | 30.00th=[13304], 40.00th=[15664], 50.00th=[16581], 60.00th=[19530], 00:16:58.156 | 70.00th=[21627], 80.00th=[23462], 90.00th=[25035], 95.00th=[25822], 00:16:58.156 | 99.00th=[26608], 99.50th=[27657], 99.90th=[28443], 99.95th=[28705], 00:16:58.156 | 99.99th=[28705] 00:16:58.156 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:16:58.156 slat (usec): min=6, max=10470, avg=116.62, stdev=637.62 00:16:58.156 clat (usec): min=3502, max=31209, avg=15080.55, stdev=5103.09 00:16:58.156 lat (usec): min=7651, max=31237, avg=15197.17, stdev=5106.77 00:16:58.156 clat percentiles (usec): 00:16:58.156 | 1.00th=[ 8094], 5.00th=[10552], 10.00th=[11207], 20.00th=[11600], 00:16:58.156 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13042], 60.00th=[14091], 00:16:58.156 | 70.00th=[16450], 80.00th=[17695], 90.00th=[22414], 95.00th=[27395], 00:16:58.156 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:16:58.156 | 99.99th=[31327] 00:16:58.156 bw ( KiB/s): min=15104, max=17576, per=30.82%, avg=16340.00, stdev=1747.97, samples=2 00:16:58.156 iops : min= 3776, max= 4394, avg=4085.00, stdev=436.99, samples=2 00:16:58.156 lat (msec) : 4=0.03%, 10=2.95%, 20=71.28%, 50=25.74% 00:16:58.156 cpu : usr=3.28%, sys=11.02%, ctx=271, majf=0, minf=11 00:16:58.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:58.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.156 issued rwts: total=3701,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.156 job2: (groupid=0, jobs=1): err= 0: pid=68522: Wed Apr 17 14:36:06 2024 00:16:58.156 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:16:58.156 slat (usec): min=5, max=7278, avg=95.18, stdev=439.09 00:16:58.156 clat (usec): min=8922, max=23478, avg=12609.75, stdev=1606.19 00:16:58.156 lat (usec): min=8951, max=23507, avg=12704.93, stdev=1628.60 00:16:58.156 clat percentiles (usec): 00:16:58.156 | 1.00th=[10028], 5.00th=[10683], 10.00th=[10945], 20.00th=[11469], 00:16:58.156 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:16:58.156 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14615], 95.00th=[15008], 00:16:58.156 | 99.00th=[19006], 99.50th=[19006], 99.90th=[21103], 
99.95th=[21103], 00:16:58.156 | 99.99th=[23462] 00:16:58.156 write: IOPS=5235, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec); 0 zone resets 00:16:58.156 slat (usec): min=9, max=7545, avg=89.30, stdev=511.28 00:16:58.156 clat (usec): min=3139, max=21176, avg=11861.96, stdev=1786.69 00:16:58.156 lat (usec): min=3157, max=21201, avg=11951.26, stdev=1854.59 00:16:58.156 clat percentiles (usec): 00:16:58.156 | 1.00th=[ 7242], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10552], 00:16:58.156 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11731], 60.00th=[12387], 00:16:58.156 | 70.00th=[12780], 80.00th=[13435], 90.00th=[13960], 95.00th=[14484], 00:16:58.156 | 99.00th=[15401], 99.50th=[17171], 99.90th=[18744], 99.95th=[19530], 00:16:58.156 | 99.99th=[21103] 00:16:58.156 bw ( KiB/s): min=20480, max=20600, per=38.74%, avg=20540.00, stdev=84.85, samples=2 00:16:58.156 iops : min= 5120, max= 5150, avg=5135.00, stdev=21.21, samples=2 00:16:58.156 lat (msec) : 4=0.35%, 10=4.22%, 20=95.36%, 50=0.07% 00:16:58.156 cpu : usr=5.08%, sys=14.86%, ctx=331, majf=0, minf=11 00:16:58.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:58.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.157 issued rwts: total=5120,5256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.157 job3: (groupid=0, jobs=1): err= 0: pid=68523: Wed Apr 17 14:36:06 2024 00:16:58.157 read: IOPS=1649, BW=6599KiB/s (6758kB/s)(6652KiB/1008msec) 00:16:58.157 slat (usec): min=5, max=14920, avg=298.83, stdev=1155.53 00:16:58.157 clat (usec): min=1543, max=63704, avg=35379.15, stdev=10127.99 00:16:58.157 lat (usec): min=10124, max=63725, avg=35677.98, stdev=10188.36 00:16:58.157 clat percentiles (usec): 00:16:58.157 | 1.00th=[13435], 5.00th=[19792], 10.00th=[24773], 20.00th=[26084], 00:16:58.157 | 30.00th=[29230], 40.00th=[31851], 50.00th=[33424], 60.00th=[37487], 00:16:58.157 | 70.00th=[41681], 80.00th=[45876], 90.00th=[49546], 95.00th=[51643], 00:16:58.157 | 99.00th=[56361], 99.50th=[58983], 99.90th=[62129], 99.95th=[63701], 00:16:58.157 | 99.99th=[63701] 00:16:58.157 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:16:58.157 slat (usec): min=12, max=9311, avg=240.29, stdev=914.82 00:16:58.157 clat (usec): min=12239, max=90165, avg=33360.80, stdev=17081.68 00:16:58.157 lat (usec): min=13487, max=90198, avg=33601.09, stdev=17190.92 00:16:58.157 clat percentiles (usec): 00:16:58.157 | 1.00th=[16057], 5.00th=[18482], 10.00th=[19006], 20.00th=[19792], 00:16:58.157 | 30.00th=[23462], 40.00th=[25035], 50.00th=[27132], 60.00th=[30278], 00:16:58.157 | 70.00th=[34866], 80.00th=[41681], 90.00th=[64750], 95.00th=[74974], 00:16:58.157 | 99.00th=[84411], 99.50th=[84411], 99.90th=[86508], 99.95th=[89654], 00:16:58.157 | 99.99th=[89654] 00:16:58.157 bw ( KiB/s): min= 6688, max= 9688, per=15.44%, avg=8188.00, stdev=2121.32, samples=2 00:16:58.157 iops : min= 1672, max= 2422, avg=2047.00, stdev=530.33, samples=2 00:16:58.157 lat (msec) : 2=0.03%, 20=13.72%, 50=75.34%, 100=10.91% 00:16:58.157 cpu : usr=2.28%, sys=6.06%, ctx=514, majf=0, minf=10 00:16:58.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:58.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.157 issued rwts: 
total=1663,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.157 00:16:58.157 Run status group 0 (all jobs): 00:16:58.157 READ: bw=46.6MiB/s (48.8MB/s), 6107KiB/s-19.9MiB/s (6254kB/s-20.9MB/s), io=47.0MiB (49.2MB), run=1004-1008msec 00:16:58.157 WRITE: bw=51.8MiB/s (54.3MB/s), 7797KiB/s-20.4MiB/s (7984kB/s-21.4MB/s), io=52.2MiB (54.7MB), run=1004-1008msec 00:16:58.157 00:16:58.157 Disk stats (read/write): 00:16:58.157 nvme0n1: ios=1461/1536, merge=0/0, ticks=16561/17691, in_queue=34252, util=87.47% 00:16:58.157 nvme0n2: ios=3441/3584, merge=0/0, ticks=28352/25401, in_queue=53753, util=89.89% 00:16:58.157 nvme0n3: ios=4157/4608, merge=0/0, ticks=25428/22890, in_queue=48318, util=89.03% 00:16:58.157 nvme0n4: ios=1536/1667, merge=0/0, ticks=18198/16261, in_queue=34459, util=89.69% 00:16:58.157 14:36:06 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:58.157 [global] 00:16:58.157 thread=1 00:16:58.157 invalidate=1 00:16:58.157 rw=randwrite 00:16:58.157 time_based=1 00:16:58.157 runtime=1 00:16:58.157 ioengine=libaio 00:16:58.157 direct=1 00:16:58.157 bs=4096 00:16:58.157 iodepth=128 00:16:58.157 norandommap=0 00:16:58.157 numjobs=1 00:16:58.157 00:16:58.157 verify_dump=1 00:16:58.157 verify_backlog=512 00:16:58.157 verify_state_save=0 00:16:58.157 do_verify=1 00:16:58.157 verify=crc32c-intel 00:16:58.157 [job0] 00:16:58.157 filename=/dev/nvme0n1 00:16:58.157 [job1] 00:16:58.157 filename=/dev/nvme0n2 00:16:58.157 [job2] 00:16:58.157 filename=/dev/nvme0n3 00:16:58.157 [job3] 00:16:58.157 filename=/dev/nvme0n4 00:16:58.157 Could not set queue depth (nvme0n1) 00:16:58.157 Could not set queue depth (nvme0n2) 00:16:58.157 Could not set queue depth (nvme0n3) 00:16:58.157 Could not set queue depth (nvme0n4) 00:16:58.157 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.157 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.157 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.157 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.157 fio-3.35 00:16:58.157 Starting 4 threads 00:16:59.532 00:16:59.532 job0: (groupid=0, jobs=1): err= 0: pid=68577: Wed Apr 17 14:36:07 2024 00:16:59.532 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:16:59.532 slat (usec): min=6, max=12958, avg=177.88, stdev=1026.51 00:16:59.532 clat (usec): min=11057, max=49821, avg=23475.28, stdev=9685.48 00:16:59.532 lat (usec): min=13099, max=49843, avg=23653.17, stdev=9703.83 00:16:59.532 clat percentiles (usec): 00:16:59.532 | 1.00th=[12387], 5.00th=[14353], 10.00th=[15270], 20.00th=[15401], 00:16:59.532 | 30.00th=[15533], 40.00th=[18220], 50.00th=[21365], 60.00th=[24249], 00:16:59.532 | 70.00th=[25297], 80.00th=[27395], 90.00th=[41157], 95.00th=[45876], 00:16:59.532 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:16:59.532 | 99.99th=[50070] 00:16:59.532 write: IOPS=3181, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1006msec); 0 zone resets 00:16:59.532 slat (usec): min=11, max=12557, avg=133.29, stdev=714.44 00:16:59.532 clat (usec): min=4651, max=37942, avg=16939.65, stdev=6758.21 00:16:59.532 lat (usec): min=6422, max=37969, avg=17072.94, stdev=6774.48 00:16:59.532 clat percentiles (usec): 
00:16:59.532 | 1.00th=[ 7308], 5.00th=[11863], 10.00th=[12125], 20.00th=[12256], 00:16:59.532 | 30.00th=[12256], 40.00th=[12649], 50.00th=[15926], 60.00th=[16581], 00:16:59.532 | 70.00th=[16909], 80.00th=[19530], 90.00th=[24773], 95.00th=[34866], 00:16:59.532 | 99.00th=[36439], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:16:59.532 | 99.99th=[38011] 00:16:59.532 bw ( KiB/s): min=12304, max=12312, per=24.45%, avg=12308.00, stdev= 5.66, samples=2 00:16:59.532 iops : min= 3076, max= 3078, avg=3077.00, stdev= 1.41, samples=2 00:16:59.532 lat (msec) : 10=1.21%, 20=61.25%, 50=37.54% 00:16:59.532 cpu : usr=2.49%, sys=10.05%, ctx=197, majf=0, minf=9 00:16:59.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:59.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.532 issued rwts: total=3072,3201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.532 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.532 job1: (groupid=0, jobs=1): err= 0: pid=68578: Wed Apr 17 14:36:07 2024 00:16:59.532 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:16:59.532 slat (usec): min=8, max=8285, avg=171.89, stdev=590.84 00:16:59.532 clat (usec): min=10440, max=31758, avg=21747.45, stdev=3345.09 00:16:59.532 lat (usec): min=10470, max=31780, avg=21919.33, stdev=3365.06 00:16:59.532 clat percentiles (usec): 00:16:59.532 | 1.00th=[11076], 5.00th=[16057], 10.00th=[17957], 20.00th=[19268], 00:16:59.532 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21627], 60.00th=[22414], 00:16:59.532 | 70.00th=[23200], 80.00th=[24511], 90.00th=[26084], 95.00th=[26870], 00:16:59.532 | 99.00th=[28181], 99.50th=[28705], 99.90th=[30016], 99.95th=[31065], 00:16:59.532 | 99.99th=[31851] 00:16:59.532 write: IOPS=3303, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1004msec); 0 zone resets 00:16:59.532 slat (usec): min=5, max=4981, avg=134.61, stdev=455.98 00:16:59.532 clat (usec): min=3550, max=29898, avg=18110.40, stdev=4306.73 00:16:59.532 lat (usec): min=5120, max=29918, avg=18245.01, stdev=4329.89 00:16:59.532 clat percentiles (usec): 00:16:59.532 | 1.00th=[ 9896], 5.00th=[11731], 10.00th=[12125], 20.00th=[13435], 00:16:59.532 | 30.00th=[15270], 40.00th=[16909], 50.00th=[18482], 60.00th=[19792], 00:16:59.532 | 70.00th=[21365], 80.00th=[22152], 90.00th=[23462], 95.00th=[24511], 00:16:59.532 | 99.00th=[26346], 99.50th=[27395], 99.90th=[27395], 99.95th=[29492], 00:16:59.532 | 99.99th=[30016] 00:16:59.532 bw ( KiB/s): min=11608, max=13912, per=25.34%, avg=12760.00, stdev=1629.17, samples=2 00:16:59.532 iops : min= 2902, max= 3478, avg=3190.00, stdev=407.29, samples=2 00:16:59.532 lat (msec) : 4=0.02%, 10=0.56%, 20=42.85%, 50=56.57% 00:16:59.532 cpu : usr=2.39%, sys=9.47%, ctx=1030, majf=0, minf=13 00:16:59.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:59.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.532 issued rwts: total=3072,3317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.532 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.532 job2: (groupid=0, jobs=1): err= 0: pid=68579: Wed Apr 17 14:36:07 2024 00:16:59.532 read: IOPS=2860, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1005msec) 00:16:59.532 slat (usec): min=5, max=8997, avg=164.74, stdev=788.78 00:16:59.532 clat (usec): min=771, max=39648, avg=20319.57, 
stdev=4322.13 00:16:59.532 lat (usec): min=4095, max=39671, avg=20484.31, stdev=4353.42 00:16:59.532 clat percentiles (usec): 00:16:59.532 | 1.00th=[ 4752], 5.00th=[15139], 10.00th=[16909], 20.00th=[18220], 00:16:59.532 | 30.00th=[18482], 40.00th=[18482], 50.00th=[18744], 60.00th=[20317], 00:16:59.532 | 70.00th=[22676], 80.00th=[24773], 90.00th=[25297], 95.00th=[26346], 00:16:59.532 | 99.00th=[31851], 99.50th=[33817], 99.90th=[39584], 99.95th=[39584], 00:16:59.532 | 99.99th=[39584] 00:16:59.532 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:16:59.532 slat (usec): min=11, max=12716, avg=163.29, stdev=750.19 00:16:59.532 clat (usec): min=8749, max=54152, avg=22226.03, stdev=10917.84 00:16:59.532 lat (usec): min=8803, max=54218, avg=22389.32, stdev=11001.00 00:16:59.532 clat percentiles (usec): 00:16:59.532 | 1.00th=[10945], 5.00th=[12256], 10.00th=[12518], 20.00th=[12911], 00:16:59.532 | 30.00th=[13042], 40.00th=[13435], 50.00th=[18482], 60.00th=[21627], 00:16:59.532 | 70.00th=[28443], 80.00th=[32637], 90.00th=[40109], 95.00th=[44303], 00:16:59.532 | 99.00th=[49021], 99.50th=[49546], 99.90th=[54264], 99.95th=[54264], 00:16:59.532 | 99.99th=[54264] 00:16:59.532 bw ( KiB/s): min=11336, max=13240, per=24.41%, avg=12288.00, stdev=1346.33, samples=2 00:16:59.532 iops : min= 2834, max= 3310, avg=3072.00, stdev=336.58, samples=2 00:16:59.532 lat (usec) : 1000=0.02% 00:16:59.532 lat (msec) : 10=1.16%, 20=56.05%, 50=42.53%, 100=0.25% 00:16:59.532 cpu : usr=3.09%, sys=9.56%, ctx=277, majf=0, minf=12 00:16:59.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:59.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.533 issued rwts: total=2875,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.533 job3: (groupid=0, jobs=1): err= 0: pid=68580: Wed Apr 17 14:36:07 2024 00:16:59.533 read: IOPS=3006, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1004msec) 00:16:59.533 slat (usec): min=3, max=8692, avg=179.05, stdev=709.70 00:16:59.533 clat (usec): min=2918, max=33456, avg=21831.80, stdev=3162.44 00:16:59.533 lat (usec): min=3415, max=33470, avg=22010.85, stdev=3164.14 00:16:59.533 clat percentiles (usec): 00:16:59.533 | 1.00th=[12125], 5.00th=[17171], 10.00th=[19006], 20.00th=[20317], 00:16:59.533 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21627], 60.00th=[22152], 00:16:59.533 | 70.00th=[22938], 80.00th=[23987], 90.00th=[25560], 95.00th=[26608], 00:16:59.533 | 99.00th=[27919], 99.50th=[29230], 99.90th=[33424], 99.95th=[33424], 00:16:59.533 | 99.99th=[33424] 00:16:59.533 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:16:59.533 slat (usec): min=11, max=5097, avg=141.80, stdev=407.77 00:16:59.533 clat (usec): min=10208, max=28613, avg=19802.58, stdev=3557.08 00:16:59.533 lat (usec): min=10479, max=28632, avg=19944.38, stdev=3569.40 00:16:59.533 clat percentiles (usec): 00:16:59.533 | 1.00th=[12387], 5.00th=[12780], 10.00th=[13042], 20.00th=[17171], 00:16:59.533 | 30.00th=[19006], 40.00th=[20055], 50.00th=[20841], 60.00th=[21627], 00:16:59.533 | 70.00th=[21890], 80.00th=[22152], 90.00th=[23200], 95.00th=[24511], 00:16:59.533 | 99.00th=[26084], 99.50th=[26346], 99.90th=[27132], 99.95th=[27395], 00:16:59.533 | 99.99th=[28705] 00:16:59.533 bw ( KiB/s): min=12288, max=12312, per=24.43%, avg=12300.00, stdev=16.97, samples=2 00:16:59.533 iops : min= 
3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:16:59.533 lat (msec) : 4=0.30%, 10=0.20%, 20=27.80%, 50=71.71% 00:16:59.533 cpu : usr=1.99%, sys=9.67%, ctx=949, majf=0, minf=15 00:16:59.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:59.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.533 issued rwts: total=3019,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.533 00:16:59.533 Run status group 0 (all jobs): 00:16:59.533 READ: bw=46.7MiB/s (49.0MB/s), 11.2MiB/s-12.0MiB/s (11.7MB/s-12.5MB/s), io=47.0MiB (49.3MB), run=1004-1006msec 00:16:59.533 WRITE: bw=49.2MiB/s (51.6MB/s), 11.9MiB/s-12.9MiB/s (12.5MB/s-13.5MB/s), io=49.5MiB (51.9MB), run=1004-1006msec 00:16:59.533 00:16:59.533 Disk stats (read/write): 00:16:59.533 nvme0n1: ios=2546/2560, merge=0/0, ticks=15105/9935, in_queue=25040, util=88.78% 00:16:59.533 nvme0n2: ios=2609/2953, merge=0/0, ticks=18173/15658, in_queue=33831, util=89.83% 00:16:59.533 nvme0n3: ios=2581/2719, merge=0/0, ticks=26011/24041, in_queue=50052, util=89.53% 00:16:59.533 nvme0n4: ios=2560/2668, merge=0/0, ticks=18144/15244, in_queue=33388, util=89.68% 00:16:59.533 14:36:07 -- target/fio.sh@55 -- # sync 00:16:59.533 14:36:07 -- target/fio.sh@59 -- # fio_pid=68594 00:16:59.533 14:36:07 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:59.533 14:36:07 -- target/fio.sh@61 -- # sleep 3 00:16:59.533 [global] 00:16:59.533 thread=1 00:16:59.533 invalidate=1 00:16:59.533 rw=read 00:16:59.533 time_based=1 00:16:59.533 runtime=10 00:16:59.533 ioengine=libaio 00:16:59.533 direct=1 00:16:59.533 bs=4096 00:16:59.533 iodepth=1 00:16:59.533 norandommap=1 00:16:59.533 numjobs=1 00:16:59.533 00:16:59.533 [job0] 00:16:59.533 filename=/dev/nvme0n1 00:16:59.533 [job1] 00:16:59.533 filename=/dev/nvme0n2 00:16:59.533 [job2] 00:16:59.533 filename=/dev/nvme0n3 00:16:59.533 [job3] 00:16:59.533 filename=/dev/nvme0n4 00:16:59.533 Could not set queue depth (nvme0n1) 00:16:59.533 Could not set queue depth (nvme0n2) 00:16:59.533 Could not set queue depth (nvme0n3) 00:16:59.533 Could not set queue depth (nvme0n4) 00:16:59.533 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:59.533 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:59.533 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:59.533 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:59.533 fio-3.35 00:16:59.533 Starting 4 threads 00:17:02.830 14:36:10 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:02.830 fio: pid=68641, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:02.830 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=39854080, buflen=4096 00:17:02.830 14:36:11 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:03.087 fio: pid=68640, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:03.087 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=45871104, buflen=4096 00:17:03.087 14:36:11 -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:03.087 14:36:11 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:03.087 fio: pid=68638, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:03.087 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=7143424, buflen=4096 00:17:03.346 14:36:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:03.346 14:36:11 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:03.346 fio: pid=68639, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:03.346 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10760192, buflen=4096 00:17:03.346 14:36:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:03.346 14:36:11 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:03.346 00:17:03.346 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68638: Wed Apr 17 14:36:11 2024 00:17:03.346 read: IOPS=5178, BW=20.2MiB/s (21.2MB/s)(70.8MiB/3501msec) 00:17:03.346 slat (usec): min=8, max=9945, avg=16.82, stdev=130.51 00:17:03.347 clat (usec): min=135, max=3065, avg=174.66, stdev=44.68 00:17:03.347 lat (usec): min=149, max=10135, avg=191.48, stdev=139.37 00:17:03.347 clat percentiles (usec): 00:17:03.347 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:17:03.347 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:17:03.347 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 212], 95.00th=[ 243], 00:17:03.347 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 429], 99.95th=[ 570], 00:17:03.347 | 99.99th=[ 1876] 00:17:03.347 bw ( KiB/s): min=20632, max=22760, per=35.20%, avg=21854.67, stdev=687.15, samples=6 00:17:03.347 iops : min= 5158, max= 5690, avg=5463.67, stdev=171.79, samples=6 00:17:03.347 lat (usec) : 250=95.86%, 500=4.05%, 750=0.04%, 1000=0.02% 00:17:03.347 lat (msec) : 2=0.02%, 4=0.01% 00:17:03.347 cpu : usr=2.06%, sys=6.74%, ctx=18147, majf=0, minf=1 00:17:03.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.347 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.347 issued rwts: total=18129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.347 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68639: Wed Apr 17 14:36:11 2024 00:17:03.347 read: IOPS=5082, BW=19.9MiB/s (20.8MB/s)(74.3MiB/3741msec) 00:17:03.347 slat (usec): min=8, max=16663, avg=19.48, stdev=181.36 00:17:03.347 clat (usec): min=125, max=3702, avg=175.54, stdev=52.63 00:17:03.347 lat (usec): min=143, max=17024, avg=195.02, stdev=191.17 00:17:03.347 clat percentiles (usec): 00:17:03.347 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:17:03.347 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:17:03.347 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 217], 95.00th=[ 241], 00:17:03.347 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 506], 99.95th=[ 742], 00:17:03.347 | 99.99th=[ 3064] 00:17:03.347 bw ( KiB/s): min=13918, max=22480, per=33.06%, avg=20528.86, stdev=3018.29, samples=7 00:17:03.347 iops : min= 
3479, max= 5620, avg=5132.14, stdev=754.75, samples=7 00:17:03.347 lat (usec) : 250=96.00%, 500=3.89%, 750=0.06%, 1000=0.03% 00:17:03.347 lat (msec) : 2=0.01%, 4=0.01% 00:17:03.347 cpu : usr=1.93%, sys=7.51%, ctx=19041, majf=0, minf=1 00:17:03.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.347 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.347 issued rwts: total=19012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.347 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68640: Wed Apr 17 14:36:11 2024 00:17:03.347 read: IOPS=3420, BW=13.4MiB/s (14.0MB/s)(43.7MiB/3274msec) 00:17:03.347 slat (usec): min=8, max=16765, avg=18.43, stdev=170.73 00:17:03.347 clat (usec): min=152, max=2637, avg=272.36, stdev=66.88 00:17:03.347 lat (usec): min=165, max=17008, avg=290.79, stdev=183.18 00:17:03.347 clat percentiles (usec): 00:17:03.347 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 188], 20.00th=[ 260], 00:17:03.347 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:17:03.347 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 326], 00:17:03.347 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 930], 99.95th=[ 1565], 00:17:03.347 | 99.99th=[ 2278] 00:17:03.347 bw ( KiB/s): min=12928, max=13640, per=21.51%, avg=13356.00, stdev=258.72, samples=6 00:17:03.347 iops : min= 3232, max= 3410, avg=3339.00, stdev=64.68, samples=6 00:17:03.347 lat (usec) : 250=14.26%, 500=85.34%, 750=0.27%, 1000=0.04% 00:17:03.347 lat (msec) : 2=0.04%, 4=0.04% 00:17:03.347 cpu : usr=1.10%, sys=5.07%, ctx=11205, majf=0, minf=1 00:17:03.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.347 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.347 issued rwts: total=11200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.347 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68641: Wed Apr 17 14:36:11 2024 00:17:03.347 read: IOPS=3266, BW=12.8MiB/s (13.4MB/s)(38.0MiB/2979msec) 00:17:03.347 slat (nsec): min=8795, max=84938, avg=15851.13, stdev=5460.84 00:17:03.347 clat (usec): min=174, max=8183, avg=288.63, stdev=130.28 00:17:03.347 lat (usec): min=190, max=8207, avg=304.49, stdev=131.01 00:17:03.347 clat percentiles (usec): 00:17:03.347 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:17:03.347 | 30.00th=[ 273], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:17:03.347 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 338], 00:17:03.347 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 988], 99.95th=[ 3818], 00:17:03.347 | 99.99th=[ 8160] 00:17:03.347 bw ( KiB/s): min=12232, max=13640, per=21.16%, avg=13139.20, stdev=603.29, samples=5 00:17:03.347 iops : min= 3058, max= 3410, avg=3284.80, stdev=150.82, samples=5 00:17:03.347 lat (usec) : 250=1.64%, 500=98.02%, 750=0.20%, 1000=0.04% 00:17:03.347 lat (msec) : 2=0.03%, 4=0.02%, 10=0.04% 00:17:03.347 cpu : usr=1.11%, sys=4.63%, ctx=9732, majf=0, minf=1 00:17:03.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:17:03.347 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.347 issued rwts: total=9731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.347 00:17:03.347 Run status group 0 (all jobs): 00:17:03.347 READ: bw=60.6MiB/s (63.6MB/s), 12.8MiB/s-20.2MiB/s (13.4MB/s-21.2MB/s), io=227MiB (238MB), run=2979-3741msec 00:17:03.347 00:17:03.347 Disk stats (read/write): 00:17:03.347 nvme0n1: ios=17715/0, merge=0/0, ticks=3042/0, in_queue=3042, util=95.42% 00:17:03.347 nvme0n2: ios=18349/0, merge=0/0, ticks=3242/0, in_queue=3242, util=95.34% 00:17:03.347 nvme0n3: ios=10425/0, merge=0/0, ticks=2851/0, in_queue=2851, util=96.09% 00:17:03.347 nvme0n4: ios=9348/0, merge=0/0, ticks=2657/0, in_queue=2657, util=96.49% 00:17:03.606 14:36:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:03.606 14:36:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:03.864 14:36:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:03.864 14:36:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:04.122 14:36:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.122 14:36:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:04.380 14:36:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.380 14:36:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:04.991 14:36:13 -- target/fio.sh@69 -- # fio_status=0 00:17:04.991 14:36:13 -- target/fio.sh@70 -- # wait 68594 00:17:04.991 14:36:13 -- target/fio.sh@70 -- # fio_status=4 00:17:04.991 14:36:13 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:04.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.991 14:36:13 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:04.991 14:36:13 -- common/autotest_common.sh@1205 -- # local i=0 00:17:04.991 14:36:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:04.991 14:36:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.991 14:36:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:04.991 14:36:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.991 14:36:13 -- common/autotest_common.sh@1217 -- # return 0 00:17:04.991 14:36:13 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:04.991 nvmf hotplug test: fio failed as expected 00:17:04.991 14:36:13 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:04.991 14:36:13 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.991 14:36:13 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:04.991 14:36:13 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:04.991 14:36:13 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:04.991 14:36:13 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:04.991 14:36:13 -- target/fio.sh@91 -- # nvmftestfini 00:17:04.991 14:36:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:04.991 14:36:13 -- nvmf/common.sh@117 -- # sync 
00:17:05.250 14:36:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.250 14:36:13 -- nvmf/common.sh@120 -- # set +e 00:17:05.250 14:36:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.250 14:36:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.250 rmmod nvme_tcp 00:17:05.250 rmmod nvme_fabrics 00:17:05.250 rmmod nvme_keyring 00:17:05.250 14:36:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.250 14:36:13 -- nvmf/common.sh@124 -- # set -e 00:17:05.250 14:36:13 -- nvmf/common.sh@125 -- # return 0 00:17:05.250 14:36:13 -- nvmf/common.sh@478 -- # '[' -n 68212 ']' 00:17:05.250 14:36:13 -- nvmf/common.sh@479 -- # killprocess 68212 00:17:05.250 14:36:13 -- common/autotest_common.sh@936 -- # '[' -z 68212 ']' 00:17:05.250 14:36:13 -- common/autotest_common.sh@940 -- # kill -0 68212 00:17:05.250 14:36:13 -- common/autotest_common.sh@941 -- # uname 00:17:05.250 14:36:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.250 14:36:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68212 00:17:05.250 killing process with pid 68212 00:17:05.250 14:36:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:05.250 14:36:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:05.250 14:36:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68212' 00:17:05.250 14:36:13 -- common/autotest_common.sh@955 -- # kill 68212 00:17:05.250 14:36:13 -- common/autotest_common.sh@960 -- # wait 68212 00:17:05.509 14:36:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:05.509 14:36:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:05.509 14:36:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:05.509 14:36:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.509 14:36:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.509 14:36:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.509 14:36:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.509 14:36:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.509 14:36:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:05.509 ************************************ 00:17:05.509 END TEST nvmf_fio_target 00:17:05.509 ************************************ 00:17:05.509 00:17:05.509 real 0m19.467s 00:17:05.509 user 1m13.674s 00:17:05.509 sys 0m10.279s 00:17:05.509 14:36:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:05.509 14:36:13 -- common/autotest_common.sh@10 -- # set +x 00:17:05.509 14:36:13 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:05.509 14:36:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:05.509 14:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:05.509 14:36:13 -- common/autotest_common.sh@10 -- # set +x 00:17:05.509 ************************************ 00:17:05.509 START TEST nvmf_bdevio 00:17:05.509 ************************************ 00:17:05.509 14:36:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:05.509 * Looking for test storage... 
00:17:05.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:05.509 14:36:14 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.509 14:36:14 -- nvmf/common.sh@7 -- # uname -s 00:17:05.509 14:36:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.509 14:36:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.509 14:36:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.509 14:36:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.509 14:36:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.509 14:36:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.509 14:36:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.509 14:36:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.509 14:36:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.509 14:36:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.509 14:36:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:17:05.509 14:36:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:17:05.509 14:36:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.509 14:36:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.509 14:36:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.509 14:36:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.509 14:36:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.509 14:36:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.510 14:36:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.510 14:36:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.510 14:36:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.510 14:36:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.510 14:36:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.510 14:36:14 -- paths/export.sh@5 -- # export PATH 00:17:05.510 14:36:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.510 14:36:14 -- nvmf/common.sh@47 -- # : 0 00:17:05.510 14:36:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.510 14:36:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.510 14:36:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.510 14:36:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.510 14:36:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.510 14:36:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.510 14:36:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.510 14:36:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.510 14:36:14 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.510 14:36:14 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.510 14:36:14 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:05.510 14:36:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:05.510 14:36:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.510 14:36:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:05.510 14:36:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:05.510 14:36:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:05.510 14:36:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.510 14:36:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.510 14:36:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.768 14:36:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:05.768 14:36:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:05.768 14:36:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:05.768 14:36:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:05.768 14:36:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:05.768 14:36:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:05.768 14:36:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.768 14:36:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.768 14:36:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:05.769 14:36:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:05.769 14:36:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.769 14:36:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.769 14:36:14 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.769 14:36:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.769 14:36:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.769 14:36:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.769 14:36:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.769 14:36:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.769 14:36:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:05.769 14:36:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:05.769 Cannot find device "nvmf_tgt_br" 00:17:05.769 14:36:14 -- nvmf/common.sh@155 -- # true 00:17:05.769 14:36:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.769 Cannot find device "nvmf_tgt_br2" 00:17:05.769 14:36:14 -- nvmf/common.sh@156 -- # true 00:17:05.769 14:36:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:05.769 14:36:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:05.769 Cannot find device "nvmf_tgt_br" 00:17:05.769 14:36:14 -- nvmf/common.sh@158 -- # true 00:17:05.769 14:36:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:05.769 Cannot find device "nvmf_tgt_br2" 00:17:05.769 14:36:14 -- nvmf/common.sh@159 -- # true 00:17:05.769 14:36:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:05.769 14:36:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:05.769 14:36:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.769 14:36:14 -- nvmf/common.sh@162 -- # true 00:17:05.769 14:36:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.769 14:36:14 -- nvmf/common.sh@163 -- # true 00:17:05.769 14:36:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.769 14:36:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.769 14:36:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.769 14:36:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.769 14:36:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.769 14:36:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.769 14:36:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.769 14:36:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:05.769 14:36:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:05.769 14:36:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:05.769 14:36:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:05.769 14:36:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:05.769 14:36:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:05.769 14:36:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:05.769 14:36:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:05.769 14:36:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:06.028 14:36:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:06.028 14:36:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:06.028 14:36:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.028 14:36:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:06.028 14:36:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:06.028 14:36:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:06.028 14:36:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:06.028 14:36:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:06.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:17:06.028 00:17:06.028 --- 10.0.0.2 ping statistics --- 00:17:06.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.028 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:06.028 14:36:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:06.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:06.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:06.028 00:17:06.028 --- 10.0.0.3 ping statistics --- 00:17:06.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.028 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:06.028 14:36:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:06.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:06.028 00:17:06.028 --- 10.0.0.1 ping statistics --- 00:17:06.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.028 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:06.028 14:36:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.028 14:36:14 -- nvmf/common.sh@422 -- # return 0 00:17:06.028 14:36:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:06.028 14:36:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.028 14:36:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:06.028 14:36:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:06.028 14:36:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.028 14:36:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:06.028 14:36:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:06.028 14:36:14 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:06.028 14:36:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:06.028 14:36:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:06.028 14:36:14 -- common/autotest_common.sh@10 -- # set +x 00:17:06.028 14:36:14 -- nvmf/common.sh@470 -- # nvmfpid=68913 00:17:06.028 14:36:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:06.028 14:36:14 -- nvmf/common.sh@471 -- # waitforlisten 68913 00:17:06.028 14:36:14 -- common/autotest_common.sh@817 -- # '[' -z 68913 ']' 00:17:06.028 14:36:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.028 14:36:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:06.028 14:36:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:06.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.028 14:36:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:06.028 14:36:14 -- common/autotest_common.sh@10 -- # set +x 00:17:06.028 [2024-04-17 14:36:14.525438] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:06.028 [2024-04-17 14:36:14.525527] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.287 [2024-04-17 14:36:14.666319] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.287 [2024-04-17 14:36:14.736667] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.287 [2024-04-17 14:36:14.736725] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.287 [2024-04-17 14:36:14.736738] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.287 [2024-04-17 14:36:14.736748] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.287 [2024-04-17 14:36:14.736756] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.287 [2024-04-17 14:36:14.736882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:06.287 [2024-04-17 14:36:14.737576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:06.287 [2024-04-17 14:36:14.737683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:06.287 [2024-04-17 14:36:14.738073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.222 14:36:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:07.222 14:36:15 -- common/autotest_common.sh@850 -- # return 0 00:17:07.222 14:36:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:07.222 14:36:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:07.222 14:36:15 -- common/autotest_common.sh@10 -- # set +x 00:17:07.222 14:36:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.222 14:36:15 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.222 14:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.222 14:36:15 -- common/autotest_common.sh@10 -- # set +x 00:17:07.222 [2024-04-17 14:36:15.514497] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.222 14:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.222 14:36:15 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.222 14:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.222 14:36:15 -- common/autotest_common.sh@10 -- # set +x 00:17:07.222 Malloc0 00:17:07.222 14:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.222 14:36:15 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.222 14:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.222 14:36:15 -- common/autotest_common.sh@10 -- # set +x 00:17:07.222 14:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.222 14:36:15 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.222 14:36:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.222 14:36:15 -- common/autotest_common.sh@10 -- # set +x 00:17:07.222 14:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.222 14:36:15 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.222 14:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.222 14:36:15 -- common/autotest_common.sh@10 -- # set +x 00:17:07.222 [2024-04-17 14:36:15.570565] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.222 14:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.222 14:36:15 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:07.222 14:36:15 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:07.222 14:36:15 -- nvmf/common.sh@521 -- # config=() 00:17:07.222 14:36:15 -- nvmf/common.sh@521 -- # local subsystem config 00:17:07.222 14:36:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.222 14:36:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.222 { 00:17:07.222 "params": { 00:17:07.222 "name": "Nvme$subsystem", 00:17:07.222 "trtype": "$TEST_TRANSPORT", 00:17:07.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.222 "adrfam": "ipv4", 00:17:07.222 "trsvcid": "$NVMF_PORT", 00:17:07.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.222 "hdgst": ${hdgst:-false}, 00:17:07.222 "ddgst": ${ddgst:-false} 00:17:07.222 }, 00:17:07.222 "method": "bdev_nvme_attach_controller" 00:17:07.222 } 00:17:07.222 EOF 00:17:07.222 )") 00:17:07.222 14:36:15 -- nvmf/common.sh@543 -- # cat 00:17:07.222 14:36:15 -- nvmf/common.sh@545 -- # jq . 00:17:07.222 14:36:15 -- nvmf/common.sh@546 -- # IFS=, 00:17:07.222 14:36:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:07.222 "params": { 00:17:07.222 "name": "Nvme1", 00:17:07.222 "trtype": "tcp", 00:17:07.222 "traddr": "10.0.0.2", 00:17:07.222 "adrfam": "ipv4", 00:17:07.222 "trsvcid": "4420", 00:17:07.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.222 "hdgst": false, 00:17:07.222 "ddgst": false 00:17:07.222 }, 00:17:07.222 "method": "bdev_nvme_attach_controller" 00:17:07.222 }' 00:17:07.222 [2024-04-17 14:36:15.632146] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:07.222 [2024-04-17 14:36:15.632243] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68949 ] 00:17:07.222 [2024-04-17 14:36:15.773567] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.481 [2024-04-17 14:36:15.846698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.481 [2024-04-17 14:36:15.846840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.481 [2024-04-17 14:36:15.846847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.481 [2024-04-17 14:36:15.855736] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
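The '--json /dev/fd/62' argument hands bdevio a bdev configuration assembled by gen_nvmf_target_json from the fragment printed above. Assuming the usual SPDK JSON config wrapper (the outer "subsystems"/"bdev" structure is an assumption here; the inner params are copied verbatim from the log), a standalone invocation against the same target would look roughly like:

# Sketch: write a one-controller bdev config to a file and run bdevio against it.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json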
00:17:07.481 [2024-04-17 14:36:15.855773] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:07.481 [2024-04-17 14:36:15.855785] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:17:07.481 [2024-04-17 14:36:15.991357] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:17:07.481 I/O targets: 00:17:07.481 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:07.481 00:17:07.481 00:17:07.481 CUnit - A unit testing framework for C - Version 2.1-3 00:17:07.481 http://cunit.sourceforge.net/ 00:17:07.481 00:17:07.481 00:17:07.481 Suite: bdevio tests on: Nvme1n1 00:17:07.481 Test: blockdev write read block ...passed 00:17:07.481 Test: blockdev write zeroes read block ...passed 00:17:07.481 Test: blockdev write zeroes read no split ...passed 00:17:07.481 Test: blockdev write zeroes read split ...passed 00:17:07.481 Test: blockdev write zeroes read split partial ...passed 00:17:07.481 Test: blockdev reset ...[2024-04-17 14:36:16.022026] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:07.481 [2024-04-17 14:36:16.022142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9fdb0 (9): Bad file descriptor 00:17:07.481 passed 00:17:07.481 Test: blockdev write read 8 blocks ...[2024-04-17 14:36:16.040874] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:07.481 passed 00:17:07.481 Test: blockdev write read size > 128k ...passed 00:17:07.481 Test: blockdev write read invalid size ...passed 00:17:07.481 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:07.481 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:07.481 Test: blockdev write read max offset ...passed 00:17:07.481 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:07.481 Test: blockdev writev readv 8 blocks ...passed 00:17:07.481 Test: blockdev writev readv 30 x 1block ...passed 00:17:07.481 Test: blockdev writev readv block ...passed 00:17:07.481 Test: blockdev writev readv size > 128k ...passed 00:17:07.481 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:07.481 Test: blockdev comparev and writev ...[2024-04-17 14:36:16.048493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.481 [2024-04-17 14:36:16.048531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.481 [2024-04-17 14:36:16.048551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.481 [2024-04-17 14:36:16.048563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.481 [2024-04-17 14:36:16.049061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.481 [2024-04-17 14:36:16.049085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:07.481 [2024-04-17 14:36:16.049103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:17:07.481 [2024-04-17 14:36:16.049113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:07.481 [2024-04-17 14:36:16.049388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.481 [2024-04-17 14:36:16.049405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:07.481 [2024-04-17 14:36:16.049421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.481 [2024-04-17 14:36:16.049432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:07.481 [2024-04-17 14:36:16.049730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.481 [2024-04-17 14:36:16.049751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:07.481 [2024-04-17 14:36:16.049768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.481 [2024-04-17 14:36:16.049779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:07.481 passed 00:17:07.481 Test: blockdev nvme passthru rw ...passed 00:17:07.481 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:36:16.050668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.481 passed 00:17:07.481 Test: blockdev nvme admin passthru ...[2024-04-17 14:36:16.050692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:07.482 [2024-04-17 14:36:16.050803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.482 [2024-04-17 14:36:16.050820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:07.482 [2024-04-17 14:36:16.050924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.482 [2024-04-17 14:36:16.050940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:07.482 [2024-04-17 14:36:16.051070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.482 [2024-04-17 14:36:16.051087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:07.482 passed 00:17:07.482 Test: blockdev copy ...passed 00:17:07.482 00:17:07.482 Run Summary: Type Total Ran Passed Failed Inactive 00:17:07.482 suites 1 1 n/a 0 0 00:17:07.482 tests 23 23 23 0 0 00:17:07.482 asserts 152 152 152 0 n/a 00:17:07.482 00:17:07.482 Elapsed time = 0.146 seconds 00:17:07.740 14:36:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.740 14:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.740 14:36:16 -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.740 14:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.740 14:36:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:07.740 14:36:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:07.740 14:36:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:07.740 14:36:16 -- nvmf/common.sh@117 -- # sync 00:17:07.740 14:36:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.740 14:36:16 -- nvmf/common.sh@120 -- # set +e 00:17:07.740 14:36:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.740 14:36:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.740 rmmod nvme_tcp 00:17:07.740 rmmod nvme_fabrics 00:17:07.740 rmmod nvme_keyring 00:17:07.740 14:36:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.999 14:36:16 -- nvmf/common.sh@124 -- # set -e 00:17:07.999 14:36:16 -- nvmf/common.sh@125 -- # return 0 00:17:07.999 14:36:16 -- nvmf/common.sh@478 -- # '[' -n 68913 ']' 00:17:07.999 14:36:16 -- nvmf/common.sh@479 -- # killprocess 68913 00:17:07.999 14:36:16 -- common/autotest_common.sh@936 -- # '[' -z 68913 ']' 00:17:07.999 14:36:16 -- common/autotest_common.sh@940 -- # kill -0 68913 00:17:07.999 14:36:16 -- common/autotest_common.sh@941 -- # uname 00:17:07.999 14:36:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.999 14:36:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68913 00:17:07.999 14:36:16 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:07.999 killing process with pid 68913 00:17:07.999 14:36:16 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:07.999 14:36:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68913' 00:17:07.999 14:36:16 -- common/autotest_common.sh@955 -- # kill 68913 00:17:07.999 14:36:16 -- common/autotest_common.sh@960 -- # wait 68913 00:17:07.999 14:36:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:07.999 14:36:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:07.999 14:36:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:07.999 14:36:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.999 14:36:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.999 14:36:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.999 14:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.999 14:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.999 14:36:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:08.275 00:17:08.275 real 0m2.599s 00:17:08.275 user 0m8.400s 00:17:08.275 sys 0m0.643s 00:17:08.275 14:36:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.275 ************************************ 00:17:08.275 END TEST nvmf_bdevio 00:17:08.275 ************************************ 00:17:08.275 14:36:16 -- common/autotest_common.sh@10 -- # set +x 00:17:08.275 14:36:16 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:08.275 14:36:16 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:08.275 14:36:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:08.275 14:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.275 14:36:16 -- common/autotest_common.sh@10 -- # set +x 00:17:08.275 ************************************ 00:17:08.275 START TEST nvmf_bdevio_no_huge 00:17:08.275 ************************************ 
00:17:08.275 14:36:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:08.275 * Looking for test storage... 00:17:08.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:08.275 14:36:16 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:08.275 14:36:16 -- nvmf/common.sh@7 -- # uname -s 00:17:08.275 14:36:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.275 14:36:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.275 14:36:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.275 14:36:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.275 14:36:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.275 14:36:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.275 14:36:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.275 14:36:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.275 14:36:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.275 14:36:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.275 14:36:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:17:08.275 14:36:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:17:08.275 14:36:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.275 14:36:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.275 14:36:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:08.275 14:36:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.275 14:36:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.275 14:36:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.275 14:36:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.275 14:36:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.275 14:36:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.275 14:36:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.275 14:36:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.275 14:36:16 -- paths/export.sh@5 -- # export PATH 00:17:08.275 14:36:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.275 14:36:16 -- nvmf/common.sh@47 -- # : 0 00:17:08.275 14:36:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.275 14:36:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.275 14:36:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.275 14:36:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.275 14:36:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.275 14:36:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.275 14:36:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.275 14:36:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.275 14:36:16 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.275 14:36:16 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.275 14:36:16 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:08.275 14:36:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:08.275 14:36:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.275 14:36:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:08.275 14:36:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:08.275 14:36:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:08.275 14:36:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.275 14:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.275 14:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.275 14:36:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:08.275 14:36:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:08.275 14:36:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:08.275 14:36:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:08.275 14:36:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:08.275 14:36:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:08.275 14:36:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.275 14:36:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.275 14:36:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:08.275 14:36:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:08.276 14:36:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:08.276 14:36:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:08.276 14:36:16 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:08.276 14:36:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.276 14:36:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:08.276 14:36:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:08.276 14:36:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:08.276 14:36:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:08.276 14:36:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:08.276 14:36:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:08.276 Cannot find device "nvmf_tgt_br" 00:17:08.276 14:36:16 -- nvmf/common.sh@155 -- # true 00:17:08.276 14:36:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:08.276 Cannot find device "nvmf_tgt_br2" 00:17:08.276 14:36:16 -- nvmf/common.sh@156 -- # true 00:17:08.276 14:36:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:08.276 14:36:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:08.276 Cannot find device "nvmf_tgt_br" 00:17:08.276 14:36:16 -- nvmf/common.sh@158 -- # true 00:17:08.276 14:36:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:08.276 Cannot find device "nvmf_tgt_br2" 00:17:08.276 14:36:16 -- nvmf/common.sh@159 -- # true 00:17:08.276 14:36:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:08.535 14:36:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:08.535 14:36:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:08.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.535 14:36:16 -- nvmf/common.sh@162 -- # true 00:17:08.535 14:36:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:08.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.535 14:36:16 -- nvmf/common.sh@163 -- # true 00:17:08.535 14:36:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:08.535 14:36:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:08.535 14:36:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:08.535 14:36:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:08.535 14:36:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:08.535 14:36:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:08.535 14:36:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:08.535 14:36:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:08.535 14:36:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:08.535 14:36:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:08.535 14:36:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:08.535 14:36:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:08.535 14:36:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:08.535 14:36:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:08.535 14:36:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:08.535 14:36:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:08.535 14:36:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:08.535 14:36:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:08.535 14:36:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:08.535 14:36:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:08.535 14:36:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:08.535 14:36:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:08.535 14:36:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:08.535 14:36:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:08.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:08.535 00:17:08.535 --- 10.0.0.2 ping statistics --- 00:17:08.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.535 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:08.535 14:36:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:08.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:08.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:17:08.535 00:17:08.535 --- 10.0.0.3 ping statistics --- 00:17:08.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.535 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:08.535 14:36:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:08.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:08.535 00:17:08.535 --- 10.0.0.1 ping statistics --- 00:17:08.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.535 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:08.535 14:36:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.535 14:36:17 -- nvmf/common.sh@422 -- # return 0 00:17:08.535 14:36:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:08.535 14:36:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.535 14:36:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:08.535 14:36:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:08.535 14:36:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.535 14:36:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:08.535 14:36:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:08.794 14:36:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:08.794 14:36:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:08.794 14:36:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:08.794 14:36:17 -- common/autotest_common.sh@10 -- # set +x 00:17:08.794 14:36:17 -- nvmf/common.sh@470 -- # nvmfpid=69127 00:17:08.794 14:36:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:08.794 14:36:17 -- nvmf/common.sh@471 -- # waitforlisten 69127 00:17:08.794 14:36:17 -- common/autotest_common.sh@817 -- # '[' -z 69127 ']' 00:17:08.794 14:36:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.794 14:36:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:08.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
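nvmfappstart here launches the target inside the test namespace with hugepages disabled, and waitforlisten then blocks until the RPC socket answers. A condensed sketch of those two steps (the polling loop is illustrative; the nvmf_tgt flags are the ones shown in the log):

# Sketch: start nvmf_tgt without hugepages and wait for its RPC socket to come up.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done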
00:17:08.794 14:36:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.794 14:36:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:08.794 14:36:17 -- common/autotest_common.sh@10 -- # set +x 00:17:08.794 [2024-04-17 14:36:17.190681] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:08.794 [2024-04-17 14:36:17.190784] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:08.794 [2024-04-17 14:36:17.331022] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.053 [2024-04-17 14:36:17.438374] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.053 [2024-04-17 14:36:17.438424] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.053 [2024-04-17 14:36:17.438435] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.053 [2024-04-17 14:36:17.438444] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.053 [2024-04-17 14:36:17.438451] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.053 [2024-04-17 14:36:17.438612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:09.053 [2024-04-17 14:36:17.438663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:09.053 [2024-04-17 14:36:17.438811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:09.053 [2024-04-17 14:36:17.439341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.638 14:36:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:09.638 14:36:18 -- common/autotest_common.sh@850 -- # return 0 00:17:09.638 14:36:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:09.638 14:36:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:09.638 14:36:18 -- common/autotest_common.sh@10 -- # set +x 00:17:09.638 14:36:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.638 14:36:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:09.638 14:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.638 14:36:18 -- common/autotest_common.sh@10 -- # set +x 00:17:09.638 [2024-04-17 14:36:18.183815] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.638 14:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.638 14:36:18 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:09.638 14:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.638 14:36:18 -- common/autotest_common.sh@10 -- # set +x 00:17:09.638 Malloc0 00:17:09.638 14:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.638 14:36:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:09.638 14:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.638 14:36:18 -- common/autotest_common.sh@10 -- # set +x 00:17:09.638 14:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.638 14:36:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:09.638 14:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.638 14:36:18 -- common/autotest_common.sh@10 -- # set +x 00:17:09.638 14:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.638 14:36:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.638 14:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.638 14:36:18 -- common/autotest_common.sh@10 -- # set +x 00:17:09.638 [2024-04-17 14:36:18.227985] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.914 14:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.914 14:36:18 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:09.914 14:36:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:09.914 14:36:18 -- nvmf/common.sh@521 -- # config=() 00:17:09.914 14:36:18 -- nvmf/common.sh@521 -- # local subsystem config 00:17:09.914 14:36:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:09.914 14:36:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:09.914 { 00:17:09.914 "params": { 00:17:09.914 "name": "Nvme$subsystem", 00:17:09.914 "trtype": "$TEST_TRANSPORT", 00:17:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.914 "adrfam": "ipv4", 00:17:09.914 "trsvcid": "$NVMF_PORT", 00:17:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.914 "hdgst": ${hdgst:-false}, 00:17:09.914 "ddgst": ${ddgst:-false} 00:17:09.914 }, 00:17:09.914 "method": "bdev_nvme_attach_controller" 00:17:09.914 } 00:17:09.914 EOF 00:17:09.914 )") 00:17:09.914 14:36:18 -- nvmf/common.sh@543 -- # cat 00:17:09.914 14:36:18 -- nvmf/common.sh@545 -- # jq . 00:17:09.914 14:36:18 -- nvmf/common.sh@546 -- # IFS=, 00:17:09.914 14:36:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:09.914 "params": { 00:17:09.914 "name": "Nvme1", 00:17:09.914 "trtype": "tcp", 00:17:09.914 "traddr": "10.0.0.2", 00:17:09.914 "adrfam": "ipv4", 00:17:09.914 "trsvcid": "4420", 00:17:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:09.914 "hdgst": false, 00:17:09.914 "ddgst": false 00:17:09.914 }, 00:17:09.914 "method": "bdev_nvme_attach_controller" 00:17:09.914 }' 00:17:09.914 [2024-04-17 14:36:18.278073] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:09.914 [2024-04-17 14:36:18.278163] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid69163 ] 00:17:09.914 [2024-04-17 14:36:18.412367] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.175 [2024-04-17 14:36:18.522403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.175 [2024-04-17 14:36:18.522555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.175 [2024-04-17 14:36:18.522560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.175 [2024-04-17 14:36:18.531482] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
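For orientation, the rpc_cmd calls that provision the target for this test (transport, malloc bdev, subsystem, namespace, listener) correspond to the following rpc.py sequence; the commands and arguments are taken from the log, with only the script path spelled out in place of the harness wrapper:

# rpc.py equivalents of the rpc_cmd provisioning calls above (same arguments as in the log).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420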
00:17:10.175 [2024-04-17 14:36:18.531538] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:10.175 [2024-04-17 14:36:18.531549] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:17:10.175 [2024-04-17 14:36:18.678335] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:17:10.175 I/O targets: 00:17:10.175 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:10.175 00:17:10.175 00:17:10.175 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.175 http://cunit.sourceforge.net/ 00:17:10.175 00:17:10.175 00:17:10.175 Suite: bdevio tests on: Nvme1n1 00:17:10.175 Test: blockdev write read block ...passed 00:17:10.175 Test: blockdev write zeroes read block ...passed 00:17:10.175 Test: blockdev write zeroes read no split ...passed 00:17:10.175 Test: blockdev write zeroes read split ...passed 00:17:10.175 Test: blockdev write zeroes read split partial ...passed 00:17:10.175 Test: blockdev reset ...[2024-04-17 14:36:18.719960] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:10.175 [2024-04-17 14:36:18.720092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17031b0 (9): Bad file descriptor 00:17:10.175 [2024-04-17 14:36:18.737328] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:10.175 passed 00:17:10.175 Test: blockdev write read 8 blocks ...passed 00:17:10.175 Test: blockdev write read size > 128k ...passed 00:17:10.175 Test: blockdev write read invalid size ...passed 00:17:10.175 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:10.175 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:10.175 Test: blockdev write read max offset ...passed 00:17:10.175 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:10.175 Test: blockdev writev readv 8 blocks ...passed 00:17:10.175 Test: blockdev writev readv 30 x 1block ...passed 00:17:10.175 Test: blockdev writev readv block ...passed 00:17:10.175 Test: blockdev writev readv size > 128k ...passed 00:17:10.175 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:10.175 Test: blockdev comparev and writev ...[2024-04-17 14:36:18.746175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.175 [2024-04-17 14:36:18.746227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.175 [2024-04-17 14:36:18.746253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.175 [2024-04-17 14:36:18.746266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:10.175 [2024-04-17 14:36:18.746798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.175 [2024-04-17 14:36:18.746839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:10.175 [2024-04-17 14:36:18.746861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:17:10.175 [2024-04-17 14:36:18.746874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:10.175 [2024-04-17 14:36:18.747309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.175 [2024-04-17 14:36:18.747342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:10.175 [2024-04-17 14:36:18.747363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.176 [2024-04-17 14:36:18.747375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:10.176 [2024-04-17 14:36:18.747735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.176 [2024-04-17 14:36:18.747768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:10.176 [2024-04-17 14:36:18.747788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:10.176 [2024-04-17 14:36:18.747800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:10.176 passed 00:17:10.176 Test: blockdev nvme passthru rw ...passed 00:17:10.176 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:36:18.748771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.176 [2024-04-17 14:36:18.748806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:10.176 [2024-04-17 14:36:18.748970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.176 [2024-04-17 14:36:18.749004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:10.176 [2024-04-17 14:36:18.749164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.176 [2024-04-17 14:36:18.749189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:10.176 [2024-04-17 14:36:18.749326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:10.176 [2024-04-17 14:36:18.749377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:10.176 passed 00:17:10.176 Test: blockdev nvme admin passthru ...passed 00:17:10.176 Test: blockdev copy ...passed 00:17:10.176 00:17:10.176 Run Summary: Type Total Ran Passed Failed Inactive 00:17:10.176 suites 1 1 n/a 0 0 00:17:10.176 tests 23 23 23 0 0 00:17:10.176 asserts 152 152 152 0 n/a 00:17:10.176 00:17:10.176 Elapsed time = 0.167 seconds 00:17:10.743 14:36:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.743 14:36:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.743 14:36:19 -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.743 14:36:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.743 14:36:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:10.743 14:36:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:10.743 14:36:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:10.743 14:36:19 -- nvmf/common.sh@117 -- # sync 00:17:10.743 14:36:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:10.743 14:36:19 -- nvmf/common.sh@120 -- # set +e 00:17:10.743 14:36:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:10.744 14:36:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:10.744 rmmod nvme_tcp 00:17:10.744 rmmod nvme_fabrics 00:17:10.744 rmmod nvme_keyring 00:17:10.744 14:36:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:10.744 14:36:19 -- nvmf/common.sh@124 -- # set -e 00:17:10.744 14:36:19 -- nvmf/common.sh@125 -- # return 0 00:17:10.744 14:36:19 -- nvmf/common.sh@478 -- # '[' -n 69127 ']' 00:17:10.744 14:36:19 -- nvmf/common.sh@479 -- # killprocess 69127 00:17:10.744 14:36:19 -- common/autotest_common.sh@936 -- # '[' -z 69127 ']' 00:17:10.744 14:36:19 -- common/autotest_common.sh@940 -- # kill -0 69127 00:17:10.744 14:36:19 -- common/autotest_common.sh@941 -- # uname 00:17:10.744 14:36:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.744 14:36:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69127 00:17:10.744 14:36:19 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:10.744 14:36:19 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:10.744 14:36:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69127' 00:17:10.744 killing process with pid 69127 00:17:10.744 14:36:19 -- common/autotest_common.sh@955 -- # kill 69127 00:17:10.744 14:36:19 -- common/autotest_common.sh@960 -- # wait 69127 00:17:11.002 14:36:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:11.002 14:36:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:11.002 14:36:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:11.002 14:36:19 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.002 14:36:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.002 14:36:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.002 14:36:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.002 14:36:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.262 14:36:19 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:11.262 00:17:11.262 real 0m2.912s 00:17:11.262 user 0m9.512s 00:17:11.262 sys 0m1.080s 00:17:11.262 14:36:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:11.262 ************************************ 00:17:11.262 END TEST nvmf_bdevio_no_huge 00:17:11.262 14:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:11.262 ************************************ 00:17:11.262 14:36:19 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:11.262 14:36:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:11.262 14:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.262 14:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:11.262 ************************************ 00:17:11.262 START TEST nvmf_tls 00:17:11.262 ************************************ 00:17:11.262 14:36:19 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:11.262 * Looking for test storage... 00:17:11.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:11.262 14:36:19 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.262 14:36:19 -- nvmf/common.sh@7 -- # uname -s 00:17:11.262 14:36:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.262 14:36:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.262 14:36:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.262 14:36:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.262 14:36:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.262 14:36:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.262 14:36:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.262 14:36:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.262 14:36:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.262 14:36:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.262 14:36:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:17:11.262 14:36:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:17:11.262 14:36:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.262 14:36:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.262 14:36:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.262 14:36:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.262 14:36:19 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.262 14:36:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.262 14:36:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.262 14:36:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.262 14:36:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.262 14:36:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.262 14:36:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.262 14:36:19 -- paths/export.sh@5 -- # export PATH 00:17:11.262 14:36:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.262 14:36:19 -- nvmf/common.sh@47 -- # : 0 00:17:11.262 14:36:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.262 14:36:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.262 14:36:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.262 14:36:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.262 14:36:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.262 14:36:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.262 14:36:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.262 14:36:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.262 14:36:19 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.262 14:36:19 -- target/tls.sh@62 -- # nvmftestinit 00:17:11.262 14:36:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:11.262 14:36:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.262 14:36:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:11.262 14:36:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:11.262 14:36:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:11.262 14:36:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.262 14:36:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.262 14:36:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.262 14:36:19 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:11.262 14:36:19 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:11.262 14:36:19 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:11.262 14:36:19 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:11.262 14:36:19 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:11.262 14:36:19 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:11.262 14:36:19 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.262 14:36:19 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.262 14:36:19 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:11.262 14:36:19 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:11.262 14:36:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:11.262 14:36:19 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:11.262 14:36:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:11.262 
14:36:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.262 14:36:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:11.262 14:36:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:11.262 14:36:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:11.262 14:36:19 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:11.262 14:36:19 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:11.521 14:36:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:11.521 Cannot find device "nvmf_tgt_br" 00:17:11.521 14:36:19 -- nvmf/common.sh@155 -- # true 00:17:11.521 14:36:19 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.521 Cannot find device "nvmf_tgt_br2" 00:17:11.521 14:36:19 -- nvmf/common.sh@156 -- # true 00:17:11.521 14:36:19 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:11.521 14:36:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:11.521 Cannot find device "nvmf_tgt_br" 00:17:11.521 14:36:19 -- nvmf/common.sh@158 -- # true 00:17:11.521 14:36:19 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:11.521 Cannot find device "nvmf_tgt_br2" 00:17:11.521 14:36:19 -- nvmf/common.sh@159 -- # true 00:17:11.521 14:36:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:11.521 14:36:19 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:11.521 14:36:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:11.521 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.521 14:36:19 -- nvmf/common.sh@162 -- # true 00:17:11.521 14:36:19 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:11.521 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.521 14:36:19 -- nvmf/common.sh@163 -- # true 00:17:11.521 14:36:19 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:11.521 14:36:19 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:11.521 14:36:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:11.521 14:36:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:11.521 14:36:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:11.521 14:36:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:11.521 14:36:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:11.521 14:36:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:11.521 14:36:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:11.521 14:36:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:11.521 14:36:20 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:11.521 14:36:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:11.521 14:36:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:11.521 14:36:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:11.521 14:36:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:11.521 14:36:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:11.522 14:36:20 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:11.522 14:36:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:11.522 14:36:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:11.780 14:36:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:11.780 14:36:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:11.780 14:36:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:11.780 14:36:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.780 14:36:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:11.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:11.780 00:17:11.780 --- 10.0.0.2 ping statistics --- 00:17:11.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.780 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:11.780 14:36:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:11.781 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.781 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:11.781 00:17:11.781 --- 10.0.0.3 ping statistics --- 00:17:11.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.781 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:11.781 14:36:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:11.781 00:17:11.781 --- 10.0.0.1 ping statistics --- 00:17:11.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.781 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:11.781 14:36:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.781 14:36:20 -- nvmf/common.sh@422 -- # return 0 00:17:11.781 14:36:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:11.781 14:36:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.781 14:36:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:11.781 14:36:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:11.781 14:36:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.781 14:36:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:11.781 14:36:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:11.781 14:36:20 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:11.781 14:36:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:11.781 14:36:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:11.781 14:36:20 -- common/autotest_common.sh@10 -- # set +x 00:17:11.781 14:36:20 -- nvmf/common.sh@470 -- # nvmfpid=69348 00:17:11.781 14:36:20 -- nvmf/common.sh@471 -- # waitforlisten 69348 00:17:11.781 14:36:20 -- common/autotest_common.sh@817 -- # '[' -z 69348 ']' 00:17:11.781 14:36:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:11.781 14:36:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.781 14:36:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:11.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
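The commands above build the virtual test network used for NET_TYPE=virt runs: the target runs inside the nvmf_tgt_ns_spdk network namespace, veth pairs (nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2) tie it to the host through the nvmf_br bridge, the initiator side gets 10.0.0.1 while the target listeners sit on 10.0.0.2 and 10.0.0.3, and an iptables rule opens TCP port 4420 on the initiator interface before the ping checks. A condensed sketch of the same topology driven from Python (names and addresses taken from the log; the second target interface and the FORWARD rule are omitted for brevity, and it needs root):

    import subprocess

    def sh(*cmd):
        # Run one command and raise on failure so a broken step is obvious.
        subprocess.run(cmd, check=True)

    NS = "nvmf_tgt_ns_spdk"
    sh("ip", "netns", "add", NS)
    sh("ip", "link", "add", "nvmf_init_if", "type", "veth", "peer", "name", "nvmf_init_br")
    sh("ip", "link", "add", "nvmf_tgt_if", "type", "veth", "peer", "name", "nvmf_tgt_br")
    sh("ip", "link", "set", "nvmf_tgt_if", "netns", NS)
    sh("ip", "addr", "add", "10.0.0.1/24", "dev", "nvmf_init_if")
    sh("ip", "netns", "exec", NS, "ip", "addr", "add", "10.0.0.2/24", "dev", "nvmf_tgt_if")
    for link in ("nvmf_init_if", "nvmf_init_br", "nvmf_tgt_br"):
        sh("ip", "link", "set", link, "up")
    sh("ip", "netns", "exec", NS, "ip", "link", "set", "nvmf_tgt_if", "up")
    sh("ip", "netns", "exec", NS, "ip", "link", "set", "lo", "up")
    sh("ip", "link", "add", "nvmf_br", "type", "bridge")
    sh("ip", "link", "set", "nvmf_br", "up")
    for link in ("nvmf_init_br", "nvmf_tgt_br"):
        sh("ip", "link", "set", link, "master", "nvmf_br")
    sh("iptables", "-I", "INPUT", "1", "-i", "nvmf_init_if", "-p", "tcp", "--dport", "4420", "-j", "ACCEPT")
    sh("ping", "-c", "1", "10.0.0.2")  # connectivity check, as in the log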
00:17:11.781 14:36:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.781 14:36:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:11.781 14:36:20 -- common/autotest_common.sh@10 -- # set +x 00:17:11.781 [2024-04-17 14:36:20.263201] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:11.781 [2024-04-17 14:36:20.263291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.040 [2024-04-17 14:36:20.400527] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.040 [2024-04-17 14:36:20.456472] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.040 [2024-04-17 14:36:20.456525] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.040 [2024-04-17 14:36:20.456537] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.040 [2024-04-17 14:36:20.456545] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.040 [2024-04-17 14:36:20.456553] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.040 [2024-04-17 14:36:20.456577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.608 14:36:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:12.608 14:36:21 -- common/autotest_common.sh@850 -- # return 0 00:17:12.608 14:36:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:12.608 14:36:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:12.608 14:36:21 -- common/autotest_common.sh@10 -- # set +x 00:17:12.866 14:36:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.866 14:36:21 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:12.866 14:36:21 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:13.124 true 00:17:13.124 14:36:21 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:13.124 14:36:21 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.383 14:36:21 -- target/tls.sh@73 -- # version=0 00:17:13.383 14:36:21 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:13.383 14:36:21 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:13.642 14:36:22 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.642 14:36:22 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:13.900 14:36:22 -- target/tls.sh@81 -- # version=13 00:17:13.900 14:36:22 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:13.900 14:36:22 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:14.159 14:36:22 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.159 14:36:22 -- target/tls.sh@89 -- # jq -r .tls_version 00:17:14.418 14:36:22 -- target/tls.sh@89 -- # version=7 00:17:14.418 14:36:22 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:14.418 14:36:22 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:17:14.418 14:36:22 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:14.676 14:36:23 -- target/tls.sh@96 -- # ktls=false 00:17:14.676 14:36:23 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:14.676 14:36:23 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:14.934 14:36:23 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:14.934 14:36:23 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.191 14:36:23 -- target/tls.sh@104 -- # ktls=true 00:17:15.191 14:36:23 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:15.191 14:36:23 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:15.450 14:36:23 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:15.450 14:36:23 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.708 14:36:24 -- target/tls.sh@112 -- # ktls=false 00:17:15.708 14:36:24 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:15.708 14:36:24 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:15.708 14:36:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:15.708 14:36:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:15.708 14:36:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:15.708 14:36:24 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:15.708 14:36:24 -- nvmf/common.sh@693 -- # digest=1 00:17:15.708 14:36:24 -- nvmf/common.sh@694 -- # python - 00:17:15.708 14:36:24 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.708 14:36:24 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:15.708 14:36:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:15.708 14:36:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:15.708 14:36:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:15.708 14:36:24 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:15.708 14:36:24 -- nvmf/common.sh@693 -- # digest=1 00:17:15.708 14:36:24 -- nvmf/common.sh@694 -- # python - 00:17:15.708 14:36:24 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.708 14:36:24 -- target/tls.sh@121 -- # mktemp 00:17:15.708 14:36:24 -- target/tls.sh@121 -- # key_path=/tmp/tmp.O5T78NB3Uf 00:17:15.708 14:36:24 -- target/tls.sh@122 -- # mktemp 00:17:15.708 14:36:24 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.tnwllK4shI 00:17:15.708 14:36:24 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:15.708 14:36:24 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:15.708 14:36:24 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.O5T78NB3Uf 00:17:15.708 14:36:24 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tnwllK4shI 00:17:15.708 14:36:24 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:15.966 14:36:24 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:16.224 14:36:24 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.O5T78NB3Uf 00:17:16.224 14:36:24 -- target/tls.sh@49 -- # local 
key=/tmp/tmp.O5T78NB3Uf 00:17:16.224 14:36:24 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:16.482 [2024-04-17 14:36:24.935959] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.482 14:36:24 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.741 14:36:25 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:17.000 [2024-04-17 14:36:25.432070] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:17.000 [2024-04-17 14:36:25.432291] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.000 14:36:25 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:17.258 malloc0 00:17:17.258 14:36:25 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:17.518 14:36:25 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O5T78NB3Uf 00:17:17.777 [2024-04-17 14:36:26.170498] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:17.777 14:36:26 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.O5T78NB3Uf 00:17:30.030 Initializing NVMe Controllers 00:17:30.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:30.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:30.030 Initialization complete. Launching workers. 
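The /tmp/tmp.O5T78NB3Uf and /tmp/tmp.tnwllK4shI files written above hold PSKs in the NVMe TLS interchange format that format_interchange_psk prints: a "NVMeTLSkey-1" prefix, a two-digit hash indicator (01 for the SHA-256 variant used here), and a base64 blob of the configured secret followed by its CRC-32, all colon-separated. In this test the hex string is used literally as the secret bytes (the base64 in the log decodes back to the ASCII "0011..." text). A small sketch of that encoding, mirroring the inline python the helper runs and assuming the CRC-32 is appended little-endian as the helper does:

    import base64
    import zlib

    def format_interchange_psk(secret: bytes, hash_id: int) -> str:
        # prefix : 2-digit hash indicator : base64(secret + CRC-32 of secret) :
        crc = zlib.crc32(secret).to_bytes(4, "little")
        return "NVMeTLSkey-1:{:02x}:{}:".format(
            hash_id, base64.b64encode(secret + crc).decode())

    key = format_interchange_psk(b"00112233445566778899aabbccddeeff", 1)
    print(key)  # should match the NVMeTLSkey-1:01:... value seen in the log

The resulting key files are restricted to mode 0600 before being handed to nvmf_subsystem_add_host --psk on the target side and to the initiator's --psk / --psk-path options.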
00:17:30.030 ======================================================== 00:17:30.030 Latency(us) 00:17:30.030 Device Information : IOPS MiB/s Average min max 00:17:30.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9110.24 35.59 7026.32 3074.29 13874.75 00:17:30.030 ======================================================== 00:17:30.030 Total : 9110.24 35.59 7026.32 3074.29 13874.75 00:17:30.030 00:17:30.030 14:36:36 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O5T78NB3Uf 00:17:30.030 14:36:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:30.030 14:36:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:30.030 14:36:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:30.030 14:36:36 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O5T78NB3Uf' 00:17:30.030 14:36:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.030 14:36:36 -- target/tls.sh@28 -- # bdevperf_pid=69580 00:17:30.030 14:36:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.030 14:36:36 -- target/tls.sh@31 -- # waitforlisten 69580 /var/tmp/bdevperf.sock 00:17:30.030 14:36:36 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.030 14:36:36 -- common/autotest_common.sh@817 -- # '[' -z 69580 ']' 00:17:30.030 14:36:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.030 14:36:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.030 14:36:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.030 14:36:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.030 14:36:36 -- common/autotest_common.sh@10 -- # set +x 00:17:30.030 [2024-04-17 14:36:36.461791] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
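run_bdevperf starts the bdevperf app with -z so that it waits on its own RPC socket (/var/tmp/bdevperf.sock), attaches a TLS-enabled controller named TLSTEST with bdev_nvme_attach_controller --psk, and then drives the verify workload through bdevperf.py perform_tests, which is where the TLSTESTn1 numbers below come from. The rpc.py call is plain JSON-RPC over a Unix socket; the spdk_rpc helper below is a hypothetical stand-in for what scripts/rpc.py does, with parameter names matching the request dumps printed later for the failing cases:

    import json
    import socket

    def spdk_rpc(sock_path: str, method: str, params: dict) -> dict:
        """Send one JSON-RPC 2.0 request to an SPDK app over its Unix socket."""
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full response arrived")
                buf += chunk
                try:
                    return json.loads(buf.decode())
                except ValueError:
                    continue  # keep reading until the JSON object is complete

    resp = spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST",
        "trtype": "tcp",
        "adrfam": "ipv4",
        "traddr": "10.0.0.2",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.O5T78NB3Uf",
    })
    print(resp)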
00:17:30.030 [2024-04-17 14:36:36.461877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69580 ] 00:17:30.030 [2024-04-17 14:36:36.602074] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.030 [2024-04-17 14:36:36.669265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.030 14:36:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:30.030 14:36:37 -- common/autotest_common.sh@850 -- # return 0 00:17:30.030 14:36:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O5T78NB3Uf 00:17:30.030 [2024-04-17 14:36:37.660039] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.030 [2024-04-17 14:36:37.660150] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:30.030 TLSTESTn1 00:17:30.030 14:36:37 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:30.030 Running I/O for 10 seconds... 00:17:40.040 00:17:40.040 Latency(us) 00:17:40.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.040 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:40.040 Verification LBA range: start 0x0 length 0x2000 00:17:40.040 TLSTESTn1 : 10.02 3799.06 14.84 0.00 0.00 33618.61 7804.74 40036.54 00:17:40.040 =================================================================================================================== 00:17:40.040 Total : 3799.06 14.84 0.00 0.00 33618.61 7804.74 40036.54 00:17:40.040 0 00:17:40.040 14:36:47 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:40.040 14:36:47 -- target/tls.sh@45 -- # killprocess 69580 00:17:40.040 14:36:47 -- common/autotest_common.sh@936 -- # '[' -z 69580 ']' 00:17:40.040 14:36:47 -- common/autotest_common.sh@940 -- # kill -0 69580 00:17:40.040 14:36:47 -- common/autotest_common.sh@941 -- # uname 00:17:40.040 14:36:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.040 14:36:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69580 00:17:40.040 killing process with pid 69580 00:17:40.040 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.040 00:17:40.040 Latency(us) 00:17:40.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.040 =================================================================================================================== 00:17:40.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.040 14:36:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.040 14:36:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.040 14:36:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69580' 00:17:40.040 14:36:47 -- common/autotest_common.sh@955 -- # kill 69580 00:17:40.040 [2024-04-17 14:36:47.938052] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:40.040 
14:36:47 -- common/autotest_common.sh@960 -- # wait 69580 00:17:40.040 14:36:48 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tnwllK4shI 00:17:40.040 14:36:48 -- common/autotest_common.sh@638 -- # local es=0 00:17:40.040 14:36:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tnwllK4shI 00:17:40.040 14:36:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:40.040 14:36:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.040 14:36:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:40.040 14:36:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.041 14:36:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tnwllK4shI 00:17:40.041 14:36:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.041 14:36:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.041 14:36:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.041 14:36:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tnwllK4shI' 00:17:40.041 14:36:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.041 14:36:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.041 14:36:48 -- target/tls.sh@28 -- # bdevperf_pid=69719 00:17:40.041 14:36:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.041 14:36:48 -- target/tls.sh@31 -- # waitforlisten 69719 /var/tmp/bdevperf.sock 00:17:40.041 14:36:48 -- common/autotest_common.sh@817 -- # '[' -z 69719 ']' 00:17:40.041 14:36:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.041 14:36:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.041 14:36:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.041 14:36:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.041 14:36:48 -- common/autotest_common.sh@10 -- # set +x 00:17:40.041 [2024-04-17 14:36:48.177457] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
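The bdevperf runs that follow are negative tests wrapped in NOT: the right subsystem but the wrong key (/tmp/tmp.tnwllK4shI), the right key but an unregistered hostnqn (host2), an unknown subsystem (cnode2), and finally no key at all. Each bdev_nvme_attach_controller is expected to fail. When the hostnqn/subnqn pair has no PSK registered, the target's error (visible further down) names the TLS PSK identity it looked up, built from the host and subsystem NQNs; a tiny sketch of that string, assuming the "NVMe0R01" prefix encodes the protocol version and hash indicator exactly as shown in those log lines:

    def psk_identity(hostnqn: str, subnqn: str, hash_id: int = 1) -> str:
        # Matches the identity seen in the target log:
        #   "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>"
        return "NVMe0R{:02d} {} {}".format(hash_id, hostnqn, subnqn)

    print(psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))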
00:17:40.041 [2024-04-17 14:36:48.177732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69719 ] 00:17:40.041 [2024-04-17 14:36:48.311291] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.041 [2024-04-17 14:36:48.369810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.041 14:36:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.041 14:36:48 -- common/autotest_common.sh@850 -- # return 0 00:17:40.041 14:36:48 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tnwllK4shI 00:17:40.299 [2024-04-17 14:36:48.787305] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.299 [2024-04-17 14:36:48.787423] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.299 [2024-04-17 14:36:48.795508] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:40.299 [2024-04-17 14:36:48.796038] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118ea80 (107): Transport endpoint is not connected 00:17:40.299 [2024-04-17 14:36:48.797027] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118ea80 (9): Bad file descriptor 00:17:40.299 [2024-04-17 14:36:48.798022] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:40.299 [2024-04-17 14:36:48.798046] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:40.299 [2024-04-17 14:36:48.798059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:40.299 request: 00:17:40.299 { 00:17:40.299 "name": "TLSTEST", 00:17:40.299 "trtype": "tcp", 00:17:40.299 "traddr": "10.0.0.2", 00:17:40.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.299 "adrfam": "ipv4", 00:17:40.299 "trsvcid": "4420", 00:17:40.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.299 "psk": "/tmp/tmp.tnwllK4shI", 00:17:40.299 "method": "bdev_nvme_attach_controller", 00:17:40.299 "req_id": 1 00:17:40.299 } 00:17:40.299 Got JSON-RPC error response 00:17:40.299 response: 00:17:40.299 { 00:17:40.299 "code": -32602, 00:17:40.299 "message": "Invalid parameters" 00:17:40.299 } 00:17:40.299 14:36:48 -- target/tls.sh@36 -- # killprocess 69719 00:17:40.299 14:36:48 -- common/autotest_common.sh@936 -- # '[' -z 69719 ']' 00:17:40.299 14:36:48 -- common/autotest_common.sh@940 -- # kill -0 69719 00:17:40.299 14:36:48 -- common/autotest_common.sh@941 -- # uname 00:17:40.299 14:36:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.299 14:36:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69719 00:17:40.299 14:36:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.299 14:36:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.299 14:36:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69719' 00:17:40.299 killing process with pid 69719 00:17:40.299 14:36:48 -- common/autotest_common.sh@955 -- # kill 69719 00:17:40.299 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.299 00:17:40.299 Latency(us) 00:17:40.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.299 =================================================================================================================== 00:17:40.299 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.299 14:36:48 -- common/autotest_common.sh@960 -- # wait 69719 00:17:40.299 [2024-04-17 14:36:48.844658] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:40.558 14:36:49 -- target/tls.sh@37 -- # return 1 00:17:40.558 14:36:49 -- common/autotest_common.sh@641 -- # es=1 00:17:40.558 14:36:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:40.558 14:36:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:40.558 14:36:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:40.558 14:36:49 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O5T78NB3Uf 00:17:40.558 14:36:49 -- common/autotest_common.sh@638 -- # local es=0 00:17:40.558 14:36:49 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O5T78NB3Uf 00:17:40.558 14:36:49 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:40.558 14:36:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.558 14:36:49 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:40.558 14:36:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.558 14:36:49 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O5T78NB3Uf 00:17:40.558 14:36:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.558 14:36:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.558 14:36:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:40.558 
14:36:49 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O5T78NB3Uf' 00:17:40.558 14:36:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.558 14:36:49 -- target/tls.sh@28 -- # bdevperf_pid=69739 00:17:40.558 14:36:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.558 14:36:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.558 14:36:49 -- target/tls.sh@31 -- # waitforlisten 69739 /var/tmp/bdevperf.sock 00:17:40.558 14:36:49 -- common/autotest_common.sh@817 -- # '[' -z 69739 ']' 00:17:40.558 14:36:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.558 14:36:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.558 14:36:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.558 14:36:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.558 14:36:49 -- common/autotest_common.sh@10 -- # set +x 00:17:40.558 [2024-04-17 14:36:49.088636] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:40.558 [2024-04-17 14:36:49.088967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69739 ] 00:17:40.816 [2024-04-17 14:36:49.225817] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.816 [2024-04-17 14:36:49.284384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.793 14:36:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:41.793 14:36:50 -- common/autotest_common.sh@850 -- # return 0 00:17:41.793 14:36:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.O5T78NB3Uf 00:17:41.793 [2024-04-17 14:36:50.337337] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.793 [2024-04-17 14:36:50.337713] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:41.793 [2024-04-17 14:36:50.345572] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:41.793 [2024-04-17 14:36:50.345801] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:41.793 [2024-04-17 14:36:50.345993] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:41.793 [2024-04-17 14:36:50.346483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2186a80 (107): Transport endpoint is not connected 00:17:41.793 [2024-04-17 14:36:50.347469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2186a80 (9): Bad file descriptor 00:17:41.793 [2024-04-17 
14:36:50.348465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:41.793 [2024-04-17 14:36:50.348491] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:41.793 [2024-04-17 14:36:50.348505] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:41.793 request: 00:17:41.793 { 00:17:41.793 "name": "TLSTEST", 00:17:41.793 "trtype": "tcp", 00:17:41.793 "traddr": "10.0.0.2", 00:17:41.793 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:41.793 "adrfam": "ipv4", 00:17:41.793 "trsvcid": "4420", 00:17:41.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.793 "psk": "/tmp/tmp.O5T78NB3Uf", 00:17:41.793 "method": "bdev_nvme_attach_controller", 00:17:41.793 "req_id": 1 00:17:41.793 } 00:17:41.793 Got JSON-RPC error response 00:17:41.793 response: 00:17:41.793 { 00:17:41.793 "code": -32602, 00:17:41.793 "message": "Invalid parameters" 00:17:41.793 } 00:17:41.793 14:36:50 -- target/tls.sh@36 -- # killprocess 69739 00:17:41.793 14:36:50 -- common/autotest_common.sh@936 -- # '[' -z 69739 ']' 00:17:41.793 14:36:50 -- common/autotest_common.sh@940 -- # kill -0 69739 00:17:41.793 14:36:50 -- common/autotest_common.sh@941 -- # uname 00:17:41.793 14:36:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.793 14:36:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69739 00:17:41.793 killing process with pid 69739 00:17:41.793 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.793 00:17:41.793 Latency(us) 00:17:41.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.793 =================================================================================================================== 00:17:41.793 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.793 14:36:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:41.793 14:36:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:41.793 14:36:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69739' 00:17:41.793 14:36:50 -- common/autotest_common.sh@955 -- # kill 69739 00:17:41.793 [2024-04-17 14:36:50.393802] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:41.793 14:36:50 -- common/autotest_common.sh@960 -- # wait 69739 00:17:42.052 14:36:50 -- target/tls.sh@37 -- # return 1 00:17:42.052 14:36:50 -- common/autotest_common.sh@641 -- # es=1 00:17:42.052 14:36:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:42.052 14:36:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:42.052 14:36:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:42.052 14:36:50 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O5T78NB3Uf 00:17:42.052 14:36:50 -- common/autotest_common.sh@638 -- # local es=0 00:17:42.052 14:36:50 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O5T78NB3Uf 00:17:42.052 14:36:50 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:42.052 14:36:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:42.052 14:36:50 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:42.052 14:36:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:42.052 
14:36:50 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O5T78NB3Uf 00:17:42.052 14:36:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:42.052 14:36:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:42.052 14:36:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:42.052 14:36:50 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O5T78NB3Uf' 00:17:42.052 14:36:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.052 14:36:50 -- target/tls.sh@28 -- # bdevperf_pid=69761 00:17:42.052 14:36:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.052 14:36:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.052 14:36:50 -- target/tls.sh@31 -- # waitforlisten 69761 /var/tmp/bdevperf.sock 00:17:42.052 14:36:50 -- common/autotest_common.sh@817 -- # '[' -z 69761 ']' 00:17:42.052 14:36:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.052 14:36:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.052 14:36:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.052 14:36:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.052 14:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:42.052 [2024-04-17 14:36:50.630902] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:42.052 [2024-04-17 14:36:50.631205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69761 ] 00:17:42.310 [2024-04-17 14:36:50.764361] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.310 [2024-04-17 14:36:50.823965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.244 14:36:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.244 14:36:51 -- common/autotest_common.sh@850 -- # return 0 00:17:43.244 14:36:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O5T78NB3Uf 00:17:43.244 [2024-04-17 14:36:51.841635] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.244 [2024-04-17 14:36:51.842021] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:43.502 [2024-04-17 14:36:51.848379] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:43.503 [2024-04-17 14:36:51.848422] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:43.503 [2024-04-17 14:36:51.848479] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:43.503 [2024-04-17 14:36:51.848915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123fa80 (107): Transport endpoint is not connected 00:17:43.503 [2024-04-17 14:36:51.849888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123fa80 (9): Bad file descriptor 00:17:43.503 [2024-04-17 14:36:51.850882] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:43.503 [2024-04-17 14:36:51.850920] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:43.503 [2024-04-17 14:36:51.850937] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:43.503 request: 00:17:43.503 { 00:17:43.503 "name": "TLSTEST", 00:17:43.503 "trtype": "tcp", 00:17:43.503 "traddr": "10.0.0.2", 00:17:43.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.503 "adrfam": "ipv4", 00:17:43.503 "trsvcid": "4420", 00:17:43.503 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:43.503 "psk": "/tmp/tmp.O5T78NB3Uf", 00:17:43.503 "method": "bdev_nvme_attach_controller", 00:17:43.503 "req_id": 1 00:17:43.503 } 00:17:43.503 Got JSON-RPC error response 00:17:43.503 response: 00:17:43.503 { 00:17:43.503 "code": -32602, 00:17:43.503 "message": "Invalid parameters" 00:17:43.503 } 00:17:43.503 14:36:51 -- target/tls.sh@36 -- # killprocess 69761 00:17:43.503 14:36:51 -- common/autotest_common.sh@936 -- # '[' -z 69761 ']' 00:17:43.503 14:36:51 -- common/autotest_common.sh@940 -- # kill -0 69761 00:17:43.503 14:36:51 -- common/autotest_common.sh@941 -- # uname 00:17:43.503 14:36:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.503 14:36:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69761 00:17:43.503 killing process with pid 69761 00:17:43.503 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.503 00:17:43.503 Latency(us) 00:17:43.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.503 =================================================================================================================== 00:17:43.503 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.503 14:36:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:43.503 14:36:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:43.503 14:36:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69761' 00:17:43.503 14:36:51 -- common/autotest_common.sh@955 -- # kill 69761 00:17:43.503 [2024-04-17 14:36:51.895063] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:43.503 14:36:51 -- common/autotest_common.sh@960 -- # wait 69761 00:17:43.503 14:36:52 -- target/tls.sh@37 -- # return 1 00:17:43.503 14:36:52 -- common/autotest_common.sh@641 -- # es=1 00:17:43.503 14:36:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:43.503 14:36:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:43.503 14:36:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:43.503 14:36:52 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.503 14:36:52 -- common/autotest_common.sh@638 -- # local es=0 00:17:43.503 14:36:52 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.503 
14:36:52 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:43.503 14:36:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:43.503 14:36:52 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:43.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.503 14:36:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:43.503 14:36:52 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.503 14:36:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:43.503 14:36:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:43.503 14:36:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:43.503 14:36:52 -- target/tls.sh@23 -- # psk= 00:17:43.503 14:36:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.503 14:36:52 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.503 14:36:52 -- target/tls.sh@28 -- # bdevperf_pid=69794 00:17:43.503 14:36:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.503 14:36:52 -- target/tls.sh@31 -- # waitforlisten 69794 /var/tmp/bdevperf.sock 00:17:43.503 14:36:52 -- common/autotest_common.sh@817 -- # '[' -z 69794 ']' 00:17:43.503 14:36:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.503 14:36:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:43.503 14:36:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.503 14:36:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:43.503 14:36:52 -- common/autotest_common.sh@10 -- # set +x 00:17:43.761 [2024-04-17 14:36:52.143059] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
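This last bdevperf instance (the tls.sh@155 case) repeats the attach with an empty PSK against the listener that was created with TLS enabled (-k), so the connection is dropped during setup and the RPC comes back as the -32602 "Invalid parameters" response shown in the dump that follows. Reusing the hypothetical spdk_rpc() helper sketched earlier, the failing call is simply the same request without a "psk" member:

    # Hypothetical reuse of the spdk_rpc() helper sketched above.
    resp = spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST",
        "trtype": "tcp",
        "adrfam": "ipv4",
        "traddr": "10.0.0.2",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
    })
    assert "error" in resp and resp["error"]["code"] == -32602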
00:17:43.761 [2024-04-17 14:36:52.143393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69794 ] 00:17:43.761 [2024-04-17 14:36:52.284032] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.020 [2024-04-17 14:36:52.369743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.602 14:36:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:44.602 14:36:53 -- common/autotest_common.sh@850 -- # return 0 00:17:44.602 14:36:53 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:44.860 [2024-04-17 14:36:53.396001] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:44.860 [2024-04-17 14:36:53.398160] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacedc0 (9): Bad file descriptor 00:17:44.860 [2024-04-17 14:36:53.399156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:44.860 [2024-04-17 14:36:53.399304] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:44.860 [2024-04-17 14:36:53.399410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:44.860 request: 00:17:44.860 { 00:17:44.860 "name": "TLSTEST", 00:17:44.860 "trtype": "tcp", 00:17:44.860 "traddr": "10.0.0.2", 00:17:44.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.860 "adrfam": "ipv4", 00:17:44.860 "trsvcid": "4420", 00:17:44.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.860 "method": "bdev_nvme_attach_controller", 00:17:44.860 "req_id": 1 00:17:44.860 } 00:17:44.860 Got JSON-RPC error response 00:17:44.860 response: 00:17:44.860 { 00:17:44.860 "code": -32602, 00:17:44.860 "message": "Invalid parameters" 00:17:44.860 } 00:17:44.860 14:36:53 -- target/tls.sh@36 -- # killprocess 69794 00:17:44.860 14:36:53 -- common/autotest_common.sh@936 -- # '[' -z 69794 ']' 00:17:44.860 14:36:53 -- common/autotest_common.sh@940 -- # kill -0 69794 00:17:44.860 14:36:53 -- common/autotest_common.sh@941 -- # uname 00:17:44.860 14:36:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.860 14:36:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69794 00:17:44.860 killing process with pid 69794 00:17:44.860 Received shutdown signal, test time was about 10.000000 seconds 00:17:44.860 00:17:44.860 Latency(us) 00:17:44.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.860 =================================================================================================================== 00:17:44.860 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:44.860 14:36:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:44.860 14:36:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:44.860 14:36:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69794' 00:17:44.860 14:36:53 -- common/autotest_common.sh@955 -- # kill 69794 00:17:44.860 14:36:53 -- common/autotest_common.sh@960 -- # wait 69794 00:17:45.147 
14:36:53 -- target/tls.sh@37 -- # return 1 00:17:45.147 14:36:53 -- common/autotest_common.sh@641 -- # es=1 00:17:45.147 14:36:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:45.147 14:36:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:45.147 14:36:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:45.147 14:36:53 -- target/tls.sh@158 -- # killprocess 69348 00:17:45.147 14:36:53 -- common/autotest_common.sh@936 -- # '[' -z 69348 ']' 00:17:45.147 14:36:53 -- common/autotest_common.sh@940 -- # kill -0 69348 00:17:45.147 14:36:53 -- common/autotest_common.sh@941 -- # uname 00:17:45.147 14:36:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.147 14:36:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69348 00:17:45.147 killing process with pid 69348 00:17:45.147 14:36:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:45.147 14:36:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:45.147 14:36:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69348' 00:17:45.147 14:36:53 -- common/autotest_common.sh@955 -- # kill 69348 00:17:45.147 [2024-04-17 14:36:53.651100] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:45.147 14:36:53 -- common/autotest_common.sh@960 -- # wait 69348 00:17:45.437 14:36:53 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:45.437 14:36:53 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:45.437 14:36:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:45.437 14:36:53 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:45.437 14:36:53 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:45.437 14:36:53 -- nvmf/common.sh@693 -- # digest=2 00:17:45.437 14:36:53 -- nvmf/common.sh@694 -- # python - 00:17:45.437 14:36:53 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:45.437 14:36:53 -- target/tls.sh@160 -- # mktemp 00:17:45.437 14:36:53 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.XJGziDUVsy 00:17:45.437 14:36:53 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:45.437 14:36:53 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.XJGziDUVsy 00:17:45.437 14:36:53 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:45.437 14:36:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:45.437 14:36:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:45.437 14:36:53 -- common/autotest_common.sh@10 -- # set +x 00:17:45.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
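The format_interchange_psk/format_key step above wraps the raw test key in the NVMe TLS PSK interchange string (the NVMeTLSkey-1 prefix, a two-digit hash identifier, and a base64 blob carrying the key plus a CRC), writes it to a mktemp file and locks the file down to mode 0600. A minimal sketch of that conversion, assuming the 4-byte CRC32 of the key is appended little-endian before base64 encoding (as the embedded python helper in nvmf/common.sh appears to do); the key, digest and temp-file path are the ones shown in this run, everything else is illustrative:

    # test key and digest taken from the trace above; the CRC32 byte order is an assumption
    key="00112233445566778899aabbccddeeff0011223344556677"
    psk=$(python3 -c 'import sys, base64, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")' "$key")
    echo "$psk"              # the run above produced NVMeTLSkey-1:02:MDAx...wWXNJw==:
    key_path=$(mktemp)       # /tmp/tmp.XJGziDUVsy in this run
    printf '%s' "$psk" > "$key_path"
    chmod 0600 "$key_path"   # SPDK rejects PSK files that are group/world accessible (exercised later with 0666)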
00:17:45.437 14:36:53 -- nvmf/common.sh@470 -- # nvmfpid=69826 00:17:45.437 14:36:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:45.437 14:36:53 -- nvmf/common.sh@471 -- # waitforlisten 69826 00:17:45.437 14:36:53 -- common/autotest_common.sh@817 -- # '[' -z 69826 ']' 00:17:45.437 14:36:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.437 14:36:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:45.437 14:36:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.437 14:36:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:45.437 14:36:53 -- common/autotest_common.sh@10 -- # set +x 00:17:45.438 [2024-04-17 14:36:53.957282] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:45.438 [2024-04-17 14:36:53.957572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.696 [2024-04-17 14:36:54.096316] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.696 [2024-04-17 14:36:54.196684] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.696 [2024-04-17 14:36:54.196982] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.696 [2024-04-17 14:36:54.197225] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.696 [2024-04-17 14:36:54.197444] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.696 [2024-04-17 14:36:54.197652] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:45.696 [2024-04-17 14:36:54.197849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.630 14:36:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:46.630 14:36:54 -- common/autotest_common.sh@850 -- # return 0 00:17:46.630 14:36:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:46.630 14:36:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:46.630 14:36:54 -- common/autotest_common.sh@10 -- # set +x 00:17:46.630 14:36:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.630 14:36:54 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.XJGziDUVsy 00:17:46.630 14:36:54 -- target/tls.sh@49 -- # local key=/tmp/tmp.XJGziDUVsy 00:17:46.630 14:36:54 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:46.630 [2024-04-17 14:36:55.160281] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.630 14:36:55 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:46.908 14:36:55 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:47.167 [2024-04-17 14:36:55.724404] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:47.167 [2024-04-17 14:36:55.724652] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.167 14:36:55 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:47.425 malloc0 00:17:47.425 14:36:56 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:47.684 14:36:56 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJGziDUVsy 00:17:47.943 [2024-04-17 14:36:56.443287] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:47.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
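Condensed from the setup_nvmf_tgt trace above, the target side of the TLS test is a fixed sequence of JSON-RPC calls: create the TCP transport, create the subsystem, add a listener with -k (TLS enabled), back it with a malloc namespace, and register the host NQN together with the PSK file. A sketch using the NQNs, address and key path from this run; rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path shown in the trace:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJGziDUVsy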
00:17:47.943 14:36:56 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XJGziDUVsy 00:17:47.943 14:36:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.943 14:36:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.943 14:36:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:47.943 14:36:56 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XJGziDUVsy' 00:17:47.943 14:36:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.943 14:36:56 -- target/tls.sh@28 -- # bdevperf_pid=69885 00:17:47.943 14:36:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.943 14:36:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.943 14:36:56 -- target/tls.sh@31 -- # waitforlisten 69885 /var/tmp/bdevperf.sock 00:17:47.944 14:36:56 -- common/autotest_common.sh@817 -- # '[' -z 69885 ']' 00:17:47.944 14:36:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.944 14:36:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:47.944 14:36:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.944 14:36:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:47.944 14:36:56 -- common/autotest_common.sh@10 -- # set +x 00:17:47.944 [2024-04-17 14:36:56.504326] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:47.944 [2024-04-17 14:36:56.504660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69885 ] 00:17:48.201 [2024-04-17 14:36:56.641835] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.201 [2024-04-17 14:36:56.709038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.138 14:36:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:49.138 14:36:57 -- common/autotest_common.sh@850 -- # return 0 00:17:49.138 14:36:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJGziDUVsy 00:17:49.396 [2024-04-17 14:36:57.772722] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.396 [2024-04-17 14:36:57.773152] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:49.396 TLSTESTn1 00:17:49.396 14:36:57 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:49.396 Running I/O for 10 seconds... 
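On the initiator side the same trace starts bdevperf in wait mode (-z) on its own RPC socket, attaches the TLS-protected controller through that socket with the PSK, and then drives the workload with bdevperf.py perform_tests. A condensed sketch of those three steps (binary and script names shortened from the full build/examples and scripts paths in the trace):

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJGziDUVsy
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests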
00:17:59.430 00:17:59.430 Latency(us) 00:17:59.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.430 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:59.430 Verification LBA range: start 0x0 length 0x2000 00:17:59.430 TLSTESTn1 : 10.02 3655.41 14.28 0.00 0.00 34947.26 7119.59 36461.85 00:17:59.430 =================================================================================================================== 00:17:59.430 Total : 3655.41 14.28 0.00 0.00 34947.26 7119.59 36461.85 00:17:59.430 0 00:17:59.430 14:37:07 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.430 14:37:07 -- target/tls.sh@45 -- # killprocess 69885 00:17:59.430 14:37:07 -- common/autotest_common.sh@936 -- # '[' -z 69885 ']' 00:17:59.430 14:37:07 -- common/autotest_common.sh@940 -- # kill -0 69885 00:17:59.430 14:37:07 -- common/autotest_common.sh@941 -- # uname 00:17:59.430 14:37:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.430 14:37:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69885 00:17:59.430 killing process with pid 69885 00:17:59.430 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.430 00:17:59.430 Latency(us) 00:17:59.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.430 =================================================================================================================== 00:17:59.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.430 14:37:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:59.430 14:37:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:59.430 14:37:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69885' 00:17:59.430 14:37:08 -- common/autotest_common.sh@955 -- # kill 69885 00:17:59.430 [2024-04-17 14:37:08.024617] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:59.430 14:37:08 -- common/autotest_common.sh@960 -- # wait 69885 00:17:59.688 14:37:08 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.XJGziDUVsy 00:17:59.688 14:37:08 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XJGziDUVsy 00:17:59.688 14:37:08 -- common/autotest_common.sh@638 -- # local es=0 00:17:59.688 14:37:08 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XJGziDUVsy 00:17:59.688 14:37:08 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:59.688 14:37:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:59.688 14:37:08 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:59.688 14:37:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:59.688 14:37:08 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XJGziDUVsy 00:17:59.688 14:37:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.688 14:37:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.688 14:37:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.688 14:37:08 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XJGziDUVsy' 00:17:59.688 14:37:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.688 14:37:08 -- target/tls.sh@28 -- # bdevperf_pid=70015 00:17:59.688 
14:37:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.688 14:37:08 -- target/tls.sh@31 -- # waitforlisten 70015 /var/tmp/bdevperf.sock 00:17:59.688 14:37:08 -- common/autotest_common.sh@817 -- # '[' -z 70015 ']' 00:17:59.688 14:37:08 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.688 14:37:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.688 14:37:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:59.688 14:37:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.688 14:37:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:59.688 14:37:08 -- common/autotest_common.sh@10 -- # set +x 00:17:59.688 [2024-04-17 14:37:08.268248] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:59.688 [2024-04-17 14:37:08.268537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70015 ] 00:17:59.947 [2024-04-17 14:37:08.407740] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.947 [2024-04-17 14:37:08.469404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.947 14:37:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:59.947 14:37:08 -- common/autotest_common.sh@850 -- # return 0 00:17:59.947 14:37:08 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJGziDUVsy 00:18:00.205 [2024-04-17 14:37:08.758989] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:00.205 [2024-04-17 14:37:08.759246] bdev_nvme.c:6046:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:00.205 [2024-04-17 14:37:08.759357] bdev_nvme.c:6155:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.XJGziDUVsy 00:18:00.205 request: 00:18:00.205 { 00:18:00.205 "name": "TLSTEST", 00:18:00.205 "trtype": "tcp", 00:18:00.205 "traddr": "10.0.0.2", 00:18:00.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.205 "adrfam": "ipv4", 00:18:00.205 "trsvcid": "4420", 00:18:00.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.205 "psk": "/tmp/tmp.XJGziDUVsy", 00:18:00.205 "method": "bdev_nvme_attach_controller", 00:18:00.205 "req_id": 1 00:18:00.205 } 00:18:00.205 Got JSON-RPC error response 00:18:00.205 response: 00:18:00.205 { 00:18:00.205 "code": -1, 00:18:00.205 "message": "Operation not permitted" 00:18:00.205 } 00:18:00.205 14:37:08 -- target/tls.sh@36 -- # killprocess 70015 00:18:00.205 14:37:08 -- common/autotest_common.sh@936 -- # '[' -z 70015 ']' 00:18:00.205 14:37:08 -- common/autotest_common.sh@940 -- # kill -0 70015 00:18:00.205 14:37:08 -- common/autotest_common.sh@941 -- # uname 00:18:00.205 14:37:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.205 14:37:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70015 00:18:00.463 killing process with pid 70015 00:18:00.463 
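The "Operation not permitted" response above is the intended negative case: after the key file is relaxed to 0666, bdev_nvme_load_psk rejects it ("Incorrect permissions for PSK file") and bdev_nvme_attach_controller fails; the same relaxed file later makes nvmf_subsystem_add_host fail on the target side, and the tests only pass again once the mode is restored. In short, with the path from this run:

    chmod 0666 /tmp/tmp.XJGziDUVsy   # too permissive: attach_controller / add_host with --psk are rejected
    chmod 0600 /tmp/tmp.XJGziDUVsy   # required mode: the PSK loads and the TLS connection is established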
Received shutdown signal, test time was about 10.000000 seconds 00:18:00.463 00:18:00.463 Latency(us) 00:18:00.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.463 =================================================================================================================== 00:18:00.463 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:00.463 14:37:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:00.463 14:37:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:00.463 14:37:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70015' 00:18:00.463 14:37:08 -- common/autotest_common.sh@955 -- # kill 70015 00:18:00.463 14:37:08 -- common/autotest_common.sh@960 -- # wait 70015 00:18:00.463 14:37:08 -- target/tls.sh@37 -- # return 1 00:18:00.463 14:37:08 -- common/autotest_common.sh@641 -- # es=1 00:18:00.463 14:37:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:00.463 14:37:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:00.463 14:37:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:00.463 14:37:08 -- target/tls.sh@174 -- # killprocess 69826 00:18:00.463 14:37:08 -- common/autotest_common.sh@936 -- # '[' -z 69826 ']' 00:18:00.463 14:37:08 -- common/autotest_common.sh@940 -- # kill -0 69826 00:18:00.463 14:37:08 -- common/autotest_common.sh@941 -- # uname 00:18:00.463 14:37:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.463 14:37:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69826 00:18:00.463 killing process with pid 69826 00:18:00.463 14:37:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:00.463 14:37:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:00.463 14:37:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69826' 00:18:00.463 14:37:09 -- common/autotest_common.sh@955 -- # kill 69826 00:18:00.463 [2024-04-17 14:37:09.008782] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:00.463 14:37:09 -- common/autotest_common.sh@960 -- # wait 69826 00:18:00.720 14:37:09 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:00.721 14:37:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:00.721 14:37:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:00.721 14:37:09 -- common/autotest_common.sh@10 -- # set +x 00:18:00.721 14:37:09 -- nvmf/common.sh@470 -- # nvmfpid=70040 00:18:00.721 14:37:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.721 14:37:09 -- nvmf/common.sh@471 -- # waitforlisten 70040 00:18:00.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.721 14:37:09 -- common/autotest_common.sh@817 -- # '[' -z 70040 ']' 00:18:00.721 14:37:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.721 14:37:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.721 14:37:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:00.721 14:37:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.721 14:37:09 -- common/autotest_common.sh@10 -- # set +x 00:18:00.721 [2024-04-17 14:37:09.254362] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:00.721 [2024-04-17 14:37:09.254451] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.978 [2024-04-17 14:37:09.391316] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.978 [2024-04-17 14:37:09.479239] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.978 [2024-04-17 14:37:09.479301] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.978 [2024-04-17 14:37:09.479314] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.978 [2024-04-17 14:37:09.479324] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.978 [2024-04-17 14:37:09.479334] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.978 [2024-04-17 14:37:09.479370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.979 14:37:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:00.979 14:37:09 -- common/autotest_common.sh@850 -- # return 0 00:18:00.979 14:37:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:00.979 14:37:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:00.979 14:37:09 -- common/autotest_common.sh@10 -- # set +x 00:18:01.237 14:37:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.238 14:37:09 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.XJGziDUVsy 00:18:01.238 14:37:09 -- common/autotest_common.sh@638 -- # local es=0 00:18:01.238 14:37:09 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.XJGziDUVsy 00:18:01.238 14:37:09 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:18:01.238 14:37:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:01.238 14:37:09 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:18:01.238 14:37:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:01.238 14:37:09 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.XJGziDUVsy 00:18:01.238 14:37:09 -- target/tls.sh@49 -- # local key=/tmp/tmp.XJGziDUVsy 00:18:01.238 14:37:09 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:01.496 [2024-04-17 14:37:09.867680] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.496 14:37:09 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:01.754 14:37:10 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:02.012 [2024-04-17 14:37:10.387737] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:02.012 [2024-04-17 14:37:10.387946] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.012 14:37:10 -- target/tls.sh@55 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:02.269 malloc0 00:18:02.269 14:37:10 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.527 14:37:10 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJGziDUVsy 00:18:02.785 [2024-04-17 14:37:11.204734] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:02.785 [2024-04-17 14:37:11.204786] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:02.785 [2024-04-17 14:37:11.204813] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:02.785 request: 00:18:02.785 { 00:18:02.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.785 "host": "nqn.2016-06.io.spdk:host1", 00:18:02.785 "psk": "/tmp/tmp.XJGziDUVsy", 00:18:02.785 "method": "nvmf_subsystem_add_host", 00:18:02.785 "req_id": 1 00:18:02.785 } 00:18:02.785 Got JSON-RPC error response 00:18:02.785 response: 00:18:02.785 { 00:18:02.785 "code": -32603, 00:18:02.785 "message": "Internal error" 00:18:02.785 } 00:18:02.785 14:37:11 -- common/autotest_common.sh@641 -- # es=1 00:18:02.785 14:37:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:02.785 14:37:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:02.785 14:37:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:02.785 14:37:11 -- target/tls.sh@180 -- # killprocess 70040 00:18:02.785 14:37:11 -- common/autotest_common.sh@936 -- # '[' -z 70040 ']' 00:18:02.785 14:37:11 -- common/autotest_common.sh@940 -- # kill -0 70040 00:18:02.785 14:37:11 -- common/autotest_common.sh@941 -- # uname 00:18:02.785 14:37:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:02.785 14:37:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70040 00:18:02.785 killing process with pid 70040 00:18:02.785 14:37:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:02.785 14:37:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:02.785 14:37:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70040' 00:18:02.785 14:37:11 -- common/autotest_common.sh@955 -- # kill 70040 00:18:02.785 14:37:11 -- common/autotest_common.sh@960 -- # wait 70040 00:18:03.043 14:37:11 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.XJGziDUVsy 00:18:03.043 14:37:11 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:03.043 14:37:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:03.043 14:37:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:03.043 14:37:11 -- common/autotest_common.sh@10 -- # set +x 00:18:03.043 14:37:11 -- nvmf/common.sh@470 -- # nvmfpid=70101 00:18:03.043 14:37:11 -- nvmf/common.sh@471 -- # waitforlisten 70101 00:18:03.043 14:37:11 -- common/autotest_common.sh@817 -- # '[' -z 70101 ']' 00:18:03.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:03.043 14:37:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.043 14:37:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:03.043 14:37:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:03.043 14:37:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.043 14:37:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:03.043 14:37:11 -- common/autotest_common.sh@10 -- # set +x 00:18:03.043 [2024-04-17 14:37:11.516203] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:03.043 [2024-04-17 14:37:11.516363] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.301 [2024-04-17 14:37:11.660547] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.301 [2024-04-17 14:37:11.729304] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.301 [2024-04-17 14:37:11.729377] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.301 [2024-04-17 14:37:11.729391] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.301 [2024-04-17 14:37:11.729399] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.301 [2024-04-17 14:37:11.729407] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:03.301 [2024-04-17 14:37:11.729433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.250 14:37:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:04.250 14:37:12 -- common/autotest_common.sh@850 -- # return 0 00:18:04.250 14:37:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:04.250 14:37:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:04.250 14:37:12 -- common/autotest_common.sh@10 -- # set +x 00:18:04.250 14:37:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.250 14:37:12 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.XJGziDUVsy 00:18:04.250 14:37:12 -- target/tls.sh@49 -- # local key=/tmp/tmp.XJGziDUVsy 00:18:04.250 14:37:12 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:04.250 [2024-04-17 14:37:12.784340] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.250 14:37:12 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:04.509 14:37:13 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:04.766 [2024-04-17 14:37:13.356463] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.766 [2024-04-17 14:37:13.356688] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.023 14:37:13 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:05.281 malloc0 00:18:05.281 14:37:13 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:05.539 14:37:13 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJGziDUVsy 00:18:05.797 [2024-04-17 14:37:14.179320] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:05.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.797 14:37:14 -- target/tls.sh@188 -- # bdevperf_pid=70150 00:18:05.797 14:37:14 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.797 14:37:14 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:05.797 14:37:14 -- target/tls.sh@191 -- # waitforlisten 70150 /var/tmp/bdevperf.sock 00:18:05.797 14:37:14 -- common/autotest_common.sh@817 -- # '[' -z 70150 ']' 00:18:05.797 14:37:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.797 14:37:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:05.797 14:37:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.797 14:37:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:05.797 14:37:14 -- common/autotest_common.sh@10 -- # set +x 00:18:05.797 [2024-04-17 14:37:14.248653] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:18:05.797 [2024-04-17 14:37:14.248754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70150 ] 00:18:05.797 [2024-04-17 14:37:14.387162] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.055 [2024-04-17 14:37:14.456704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.988 14:37:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:06.988 14:37:15 -- common/autotest_common.sh@850 -- # return 0 00:18:06.988 14:37:15 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJGziDUVsy 00:18:06.988 [2024-04-17 14:37:15.444935] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.988 [2024-04-17 14:37:15.445343] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:06.988 TLSTESTn1 00:18:06.988 14:37:15 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:07.578 14:37:15 -- target/tls.sh@196 -- # tgtconf='{ 00:18:07.578 "subsystems": [ 00:18:07.578 { 00:18:07.578 "subsystem": "keyring", 00:18:07.578 "config": [] 00:18:07.578 }, 00:18:07.578 { 00:18:07.578 "subsystem": "iobuf", 00:18:07.578 "config": [ 00:18:07.578 { 00:18:07.578 "method": "iobuf_set_options", 00:18:07.578 "params": { 00:18:07.578 "small_pool_count": 8192, 00:18:07.578 "large_pool_count": 1024, 00:18:07.578 "small_bufsize": 8192, 00:18:07.578 "large_bufsize": 135168 00:18:07.578 } 00:18:07.578 } 00:18:07.578 ] 00:18:07.578 }, 00:18:07.578 { 00:18:07.578 "subsystem": "sock", 00:18:07.578 "config": [ 00:18:07.578 { 00:18:07.578 "method": "sock_impl_set_options", 00:18:07.578 "params": { 00:18:07.578 "impl_name": "uring", 00:18:07.578 "recv_buf_size": 2097152, 00:18:07.578 "send_buf_size": 2097152, 00:18:07.578 "enable_recv_pipe": true, 00:18:07.578 "enable_quickack": false, 00:18:07.578 "enable_placement_id": 0, 00:18:07.578 "enable_zerocopy_send_server": false, 00:18:07.578 "enable_zerocopy_send_client": false, 00:18:07.579 "zerocopy_threshold": 0, 00:18:07.579 "tls_version": 0, 00:18:07.579 "enable_ktls": false 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "sock_impl_set_options", 00:18:07.579 "params": { 00:18:07.579 "impl_name": "posix", 00:18:07.579 "recv_buf_size": 2097152, 00:18:07.579 "send_buf_size": 2097152, 00:18:07.579 "enable_recv_pipe": true, 00:18:07.579 "enable_quickack": false, 00:18:07.579 "enable_placement_id": 0, 00:18:07.579 "enable_zerocopy_send_server": true, 00:18:07.579 "enable_zerocopy_send_client": false, 00:18:07.579 "zerocopy_threshold": 0, 00:18:07.579 "tls_version": 0, 00:18:07.579 "enable_ktls": false 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "sock_impl_set_options", 00:18:07.579 "params": { 00:18:07.579 "impl_name": "ssl", 00:18:07.579 "recv_buf_size": 4096, 00:18:07.579 "send_buf_size": 4096, 00:18:07.579 "enable_recv_pipe": true, 00:18:07.579 "enable_quickack": false, 00:18:07.579 "enable_placement_id": 0, 00:18:07.579 "enable_zerocopy_send_server": true, 00:18:07.579 "enable_zerocopy_send_client": false, 00:18:07.579 
"zerocopy_threshold": 0, 00:18:07.579 "tls_version": 0, 00:18:07.579 "enable_ktls": false 00:18:07.579 } 00:18:07.579 } 00:18:07.579 ] 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "subsystem": "vmd", 00:18:07.579 "config": [] 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "subsystem": "accel", 00:18:07.579 "config": [ 00:18:07.579 { 00:18:07.579 "method": "accel_set_options", 00:18:07.579 "params": { 00:18:07.579 "small_cache_size": 128, 00:18:07.579 "large_cache_size": 16, 00:18:07.579 "task_count": 2048, 00:18:07.579 "sequence_count": 2048, 00:18:07.579 "buf_count": 2048 00:18:07.579 } 00:18:07.579 } 00:18:07.579 ] 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "subsystem": "bdev", 00:18:07.579 "config": [ 00:18:07.579 { 00:18:07.579 "method": "bdev_set_options", 00:18:07.579 "params": { 00:18:07.579 "bdev_io_pool_size": 65535, 00:18:07.579 "bdev_io_cache_size": 256, 00:18:07.579 "bdev_auto_examine": true, 00:18:07.579 "iobuf_small_cache_size": 128, 00:18:07.579 "iobuf_large_cache_size": 16 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "bdev_raid_set_options", 00:18:07.579 "params": { 00:18:07.579 "process_window_size_kb": 1024 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "bdev_iscsi_set_options", 00:18:07.579 "params": { 00:18:07.579 "timeout_sec": 30 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "bdev_nvme_set_options", 00:18:07.579 "params": { 00:18:07.579 "action_on_timeout": "none", 00:18:07.579 "timeout_us": 0, 00:18:07.579 "timeout_admin_us": 0, 00:18:07.579 "keep_alive_timeout_ms": 10000, 00:18:07.579 "arbitration_burst": 0, 00:18:07.579 "low_priority_weight": 0, 00:18:07.579 "medium_priority_weight": 0, 00:18:07.579 "high_priority_weight": 0, 00:18:07.579 "nvme_adminq_poll_period_us": 10000, 00:18:07.579 "nvme_ioq_poll_period_us": 0, 00:18:07.579 "io_queue_requests": 0, 00:18:07.579 "delay_cmd_submit": true, 00:18:07.579 "transport_retry_count": 4, 00:18:07.579 "bdev_retry_count": 3, 00:18:07.579 "transport_ack_timeout": 0, 00:18:07.579 "ctrlr_loss_timeout_sec": 0, 00:18:07.579 "reconnect_delay_sec": 0, 00:18:07.579 "fast_io_fail_timeout_sec": 0, 00:18:07.579 "disable_auto_failback": false, 00:18:07.579 "generate_uuids": false, 00:18:07.579 "transport_tos": 0, 00:18:07.579 "nvme_error_stat": false, 00:18:07.579 "rdma_srq_size": 0, 00:18:07.579 "io_path_stat": false, 00:18:07.579 "allow_accel_sequence": false, 00:18:07.579 "rdma_max_cq_size": 0, 00:18:07.579 "rdma_cm_event_timeout_ms": 0, 00:18:07.579 "dhchap_digests": [ 00:18:07.579 "sha256", 00:18:07.579 "sha384", 00:18:07.579 "sha512" 00:18:07.579 ], 00:18:07.579 "dhchap_dhgroups": [ 00:18:07.579 "null", 00:18:07.579 "ffdhe2048", 00:18:07.579 "ffdhe3072", 00:18:07.579 "ffdhe4096", 00:18:07.579 "ffdhe6144", 00:18:07.579 "ffdhe8192" 00:18:07.579 ] 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "bdev_nvme_set_hotplug", 00:18:07.579 "params": { 00:18:07.579 "period_us": 100000, 00:18:07.579 "enable": false 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "bdev_malloc_create", 00:18:07.579 "params": { 00:18:07.579 "name": "malloc0", 00:18:07.579 "num_blocks": 8192, 00:18:07.579 "block_size": 4096, 00:18:07.579 "physical_block_size": 4096, 00:18:07.579 "uuid": "e438695b-0e46-4048-905a-7629cb527dae", 00:18:07.579 "optimal_io_boundary": 0 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "bdev_wait_for_examine" 00:18:07.579 } 00:18:07.579 ] 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "subsystem": "nbd", 
00:18:07.579 "config": [] 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "subsystem": "scheduler", 00:18:07.579 "config": [ 00:18:07.579 { 00:18:07.579 "method": "framework_set_scheduler", 00:18:07.579 "params": { 00:18:07.579 "name": "static" 00:18:07.579 } 00:18:07.579 } 00:18:07.579 ] 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "subsystem": "nvmf", 00:18:07.579 "config": [ 00:18:07.579 { 00:18:07.579 "method": "nvmf_set_config", 00:18:07.579 "params": { 00:18:07.579 "discovery_filter": "match_any", 00:18:07.579 "admin_cmd_passthru": { 00:18:07.579 "identify_ctrlr": false 00:18:07.579 } 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "nvmf_set_max_subsystems", 00:18:07.579 "params": { 00:18:07.579 "max_subsystems": 1024 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "nvmf_set_crdt", 00:18:07.579 "params": { 00:18:07.579 "crdt1": 0, 00:18:07.579 "crdt2": 0, 00:18:07.579 "crdt3": 0 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "nvmf_create_transport", 00:18:07.579 "params": { 00:18:07.579 "trtype": "TCP", 00:18:07.579 "max_queue_depth": 128, 00:18:07.579 "max_io_qpairs_per_ctrlr": 127, 00:18:07.579 "in_capsule_data_size": 4096, 00:18:07.579 "max_io_size": 131072, 00:18:07.579 "io_unit_size": 131072, 00:18:07.579 "max_aq_depth": 128, 00:18:07.579 "num_shared_buffers": 511, 00:18:07.579 "buf_cache_size": 4294967295, 00:18:07.579 "dif_insert_or_strip": false, 00:18:07.579 "zcopy": false, 00:18:07.579 "c2h_success": false, 00:18:07.579 "sock_priority": 0, 00:18:07.579 "abort_timeout_sec": 1, 00:18:07.579 "ack_timeout": 0 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "nvmf_create_subsystem", 00:18:07.579 "params": { 00:18:07.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.579 "allow_any_host": false, 00:18:07.579 "serial_number": "SPDK00000000000001", 00:18:07.579 "model_number": "SPDK bdev Controller", 00:18:07.579 "max_namespaces": 10, 00:18:07.579 "min_cntlid": 1, 00:18:07.579 "max_cntlid": 65519, 00:18:07.579 "ana_reporting": false 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "nvmf_subsystem_add_host", 00:18:07.579 "params": { 00:18:07.579 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.579 "host": "nqn.2016-06.io.spdk:host1", 00:18:07.579 "psk": "/tmp/tmp.XJGziDUVsy" 00:18:07.579 } 00:18:07.579 }, 00:18:07.579 { 00:18:07.579 "method": "nvmf_subsystem_add_ns", 00:18:07.580 "params": { 00:18:07.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.580 "namespace": { 00:18:07.580 "nsid": 1, 00:18:07.580 "bdev_name": "malloc0", 00:18:07.580 "nguid": "E438695B0E464048905A7629CB527DAE", 00:18:07.580 "uuid": "e438695b-0e46-4048-905a-7629cb527dae", 00:18:07.580 "no_auto_visible": false 00:18:07.580 } 00:18:07.580 } 00:18:07.580 }, 00:18:07.580 { 00:18:07.580 "method": "nvmf_subsystem_add_listener", 00:18:07.580 "params": { 00:18:07.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.580 "listen_address": { 00:18:07.580 "trtype": "TCP", 00:18:07.580 "adrfam": "IPv4", 00:18:07.580 "traddr": "10.0.0.2", 00:18:07.580 "trsvcid": "4420" 00:18:07.580 }, 00:18:07.580 "secure_channel": true 00:18:07.580 } 00:18:07.580 } 00:18:07.580 ] 00:18:07.580 } 00:18:07.580 ] 00:18:07.580 }' 00:18:07.580 14:37:15 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:07.839 14:37:16 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:07.839 "subsystems": [ 00:18:07.839 { 00:18:07.839 "subsystem": "keyring", 00:18:07.839 "config": [] 00:18:07.839 }, 00:18:07.839 
{ 00:18:07.839 "subsystem": "iobuf", 00:18:07.839 "config": [ 00:18:07.839 { 00:18:07.839 "method": "iobuf_set_options", 00:18:07.839 "params": { 00:18:07.839 "small_pool_count": 8192, 00:18:07.839 "large_pool_count": 1024, 00:18:07.839 "small_bufsize": 8192, 00:18:07.839 "large_bufsize": 135168 00:18:07.839 } 00:18:07.839 } 00:18:07.839 ] 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "subsystem": "sock", 00:18:07.839 "config": [ 00:18:07.839 { 00:18:07.839 "method": "sock_impl_set_options", 00:18:07.839 "params": { 00:18:07.839 "impl_name": "uring", 00:18:07.839 "recv_buf_size": 2097152, 00:18:07.839 "send_buf_size": 2097152, 00:18:07.839 "enable_recv_pipe": true, 00:18:07.839 "enable_quickack": false, 00:18:07.839 "enable_placement_id": 0, 00:18:07.839 "enable_zerocopy_send_server": false, 00:18:07.839 "enable_zerocopy_send_client": false, 00:18:07.839 "zerocopy_threshold": 0, 00:18:07.839 "tls_version": 0, 00:18:07.839 "enable_ktls": false 00:18:07.839 } 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "method": "sock_impl_set_options", 00:18:07.839 "params": { 00:18:07.839 "impl_name": "posix", 00:18:07.839 "recv_buf_size": 2097152, 00:18:07.839 "send_buf_size": 2097152, 00:18:07.839 "enable_recv_pipe": true, 00:18:07.839 "enable_quickack": false, 00:18:07.839 "enable_placement_id": 0, 00:18:07.839 "enable_zerocopy_send_server": true, 00:18:07.839 "enable_zerocopy_send_client": false, 00:18:07.839 "zerocopy_threshold": 0, 00:18:07.839 "tls_version": 0, 00:18:07.839 "enable_ktls": false 00:18:07.839 } 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "method": "sock_impl_set_options", 00:18:07.839 "params": { 00:18:07.839 "impl_name": "ssl", 00:18:07.839 "recv_buf_size": 4096, 00:18:07.839 "send_buf_size": 4096, 00:18:07.839 "enable_recv_pipe": true, 00:18:07.839 "enable_quickack": false, 00:18:07.839 "enable_placement_id": 0, 00:18:07.839 "enable_zerocopy_send_server": true, 00:18:07.839 "enable_zerocopy_send_client": false, 00:18:07.839 "zerocopy_threshold": 0, 00:18:07.839 "tls_version": 0, 00:18:07.839 "enable_ktls": false 00:18:07.839 } 00:18:07.839 } 00:18:07.839 ] 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "subsystem": "vmd", 00:18:07.839 "config": [] 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "subsystem": "accel", 00:18:07.839 "config": [ 00:18:07.839 { 00:18:07.839 "method": "accel_set_options", 00:18:07.839 "params": { 00:18:07.839 "small_cache_size": 128, 00:18:07.839 "large_cache_size": 16, 00:18:07.839 "task_count": 2048, 00:18:07.839 "sequence_count": 2048, 00:18:07.839 "buf_count": 2048 00:18:07.839 } 00:18:07.839 } 00:18:07.839 ] 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "subsystem": "bdev", 00:18:07.839 "config": [ 00:18:07.839 { 00:18:07.839 "method": "bdev_set_options", 00:18:07.839 "params": { 00:18:07.839 "bdev_io_pool_size": 65535, 00:18:07.839 "bdev_io_cache_size": 256, 00:18:07.839 "bdev_auto_examine": true, 00:18:07.839 "iobuf_small_cache_size": 128, 00:18:07.839 "iobuf_large_cache_size": 16 00:18:07.839 } 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "method": "bdev_raid_set_options", 00:18:07.839 "params": { 00:18:07.839 "process_window_size_kb": 1024 00:18:07.839 } 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "method": "bdev_iscsi_set_options", 00:18:07.839 "params": { 00:18:07.839 "timeout_sec": 30 00:18:07.839 } 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "method": "bdev_nvme_set_options", 00:18:07.839 "params": { 00:18:07.839 "action_on_timeout": "none", 00:18:07.839 "timeout_us": 0, 00:18:07.839 "timeout_admin_us": 0, 00:18:07.839 "keep_alive_timeout_ms": 10000, 
00:18:07.839 "arbitration_burst": 0, 00:18:07.839 "low_priority_weight": 0, 00:18:07.839 "medium_priority_weight": 0, 00:18:07.839 "high_priority_weight": 0, 00:18:07.839 "nvme_adminq_poll_period_us": 10000, 00:18:07.839 "nvme_ioq_poll_period_us": 0, 00:18:07.839 "io_queue_requests": 512, 00:18:07.839 "delay_cmd_submit": true, 00:18:07.839 "transport_retry_count": 4, 00:18:07.839 "bdev_retry_count": 3, 00:18:07.839 "transport_ack_timeout": 0, 00:18:07.839 "ctrlr_loss_timeout_sec": 0, 00:18:07.839 "reconnect_delay_sec": 0, 00:18:07.839 "fast_io_fail_timeout_sec": 0, 00:18:07.839 "disable_auto_failback": false, 00:18:07.839 "generate_uuids": false, 00:18:07.839 "transport_tos": 0, 00:18:07.839 "nvme_error_stat": false, 00:18:07.839 "rdma_srq_size": 0, 00:18:07.839 "io_path_stat": false, 00:18:07.839 "allow_accel_sequence": false, 00:18:07.839 "rdma_max_cq_size": 0, 00:18:07.839 "rdma_cm_event_timeout_ms": 0, 00:18:07.839 "dhchap_digests": [ 00:18:07.839 "sha256", 00:18:07.839 "sha384", 00:18:07.839 "sha512" 00:18:07.839 ], 00:18:07.839 "dhchap_dhgroups": [ 00:18:07.839 "null", 00:18:07.839 "ffdhe2048", 00:18:07.839 "ffdhe3072", 00:18:07.839 "ffdhe4096", 00:18:07.839 "ffdhe6144", 00:18:07.839 "ffdhe8192" 00:18:07.839 ] 00:18:07.839 } 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "method": "bdev_nvme_attach_controller", 00:18:07.839 "params": { 00:18:07.839 "name": "TLSTEST", 00:18:07.839 "trtype": "TCP", 00:18:07.839 "adrfam": "IPv4", 00:18:07.839 "traddr": "10.0.0.2", 00:18:07.839 "trsvcid": "4420", 00:18:07.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.839 "prchk_reftag": false, 00:18:07.839 "prchk_guard": false, 00:18:07.839 "ctrlr_loss_timeout_sec": 0, 00:18:07.839 "reconnect_delay_sec": 0, 00:18:07.839 "fast_io_fail_timeout_sec": 0, 00:18:07.839 "psk": "/tmp/tmp.XJGziDUVsy", 00:18:07.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.839 "hdgst": false, 00:18:07.839 "ddgst": false 00:18:07.839 } 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "method": "bdev_nvme_set_hotplug", 00:18:07.839 "params": { 00:18:07.839 "period_us": 100000, 00:18:07.839 "enable": false 00:18:07.839 } 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "method": "bdev_wait_for_examine" 00:18:07.839 } 00:18:07.839 ] 00:18:07.839 }, 00:18:07.839 { 00:18:07.839 "subsystem": "nbd", 00:18:07.839 "config": [] 00:18:07.839 } 00:18:07.839 ] 00:18:07.839 }' 00:18:07.839 14:37:16 -- target/tls.sh@199 -- # killprocess 70150 00:18:07.839 14:37:16 -- common/autotest_common.sh@936 -- # '[' -z 70150 ']' 00:18:07.839 14:37:16 -- common/autotest_common.sh@940 -- # kill -0 70150 00:18:07.839 14:37:16 -- common/autotest_common.sh@941 -- # uname 00:18:07.839 14:37:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.839 14:37:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70150 00:18:07.839 killing process with pid 70150 00:18:07.839 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.839 00:18:07.839 Latency(us) 00:18:07.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.839 =================================================================================================================== 00:18:07.839 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:07.840 14:37:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:07.840 14:37:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:07.840 14:37:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70150' 00:18:07.840 14:37:16 
-- common/autotest_common.sh@955 -- # kill 70150 00:18:07.840 [2024-04-17 14:37:16.241687] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:07.840 14:37:16 -- common/autotest_common.sh@960 -- # wait 70150 00:18:07.840 14:37:16 -- target/tls.sh@200 -- # killprocess 70101 00:18:07.840 14:37:16 -- common/autotest_common.sh@936 -- # '[' -z 70101 ']' 00:18:07.840 14:37:16 -- common/autotest_common.sh@940 -- # kill -0 70101 00:18:07.840 14:37:16 -- common/autotest_common.sh@941 -- # uname 00:18:07.840 14:37:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.840 14:37:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70101 00:18:08.099 killing process with pid 70101 00:18:08.099 14:37:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:08.099 14:37:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:08.099 14:37:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70101' 00:18:08.099 14:37:16 -- common/autotest_common.sh@955 -- # kill 70101 00:18:08.099 [2024-04-17 14:37:16.452041] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:08.099 14:37:16 -- common/autotest_common.sh@960 -- # wait 70101 00:18:08.099 14:37:16 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:08.099 14:37:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:08.099 14:37:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:08.099 14:37:16 -- common/autotest_common.sh@10 -- # set +x 00:18:08.099 14:37:16 -- target/tls.sh@203 -- # echo '{ 00:18:08.099 "subsystems": [ 00:18:08.099 { 00:18:08.099 "subsystem": "keyring", 00:18:08.099 "config": [] 00:18:08.099 }, 00:18:08.099 { 00:18:08.099 "subsystem": "iobuf", 00:18:08.099 "config": [ 00:18:08.099 { 00:18:08.099 "method": "iobuf_set_options", 00:18:08.099 "params": { 00:18:08.099 "small_pool_count": 8192, 00:18:08.100 "large_pool_count": 1024, 00:18:08.100 "small_bufsize": 8192, 00:18:08.100 "large_bufsize": 135168 00:18:08.100 } 00:18:08.100 } 00:18:08.100 ] 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "subsystem": "sock", 00:18:08.100 "config": [ 00:18:08.100 { 00:18:08.100 "method": "sock_impl_set_options", 00:18:08.100 "params": { 00:18:08.100 "impl_name": "uring", 00:18:08.100 "recv_buf_size": 2097152, 00:18:08.100 "send_buf_size": 2097152, 00:18:08.100 "enable_recv_pipe": true, 00:18:08.100 "enable_quickack": false, 00:18:08.100 "enable_placement_id": 0, 00:18:08.100 "enable_zerocopy_send_server": false, 00:18:08.100 "enable_zerocopy_send_client": false, 00:18:08.100 "zerocopy_threshold": 0, 00:18:08.100 "tls_version": 0, 00:18:08.100 "enable_ktls": false 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "sock_impl_set_options", 00:18:08.100 "params": { 00:18:08.100 "impl_name": "posix", 00:18:08.100 "recv_buf_size": 2097152, 00:18:08.100 "send_buf_size": 2097152, 00:18:08.100 "enable_recv_pipe": true, 00:18:08.100 "enable_quickack": false, 00:18:08.100 "enable_placement_id": 0, 00:18:08.100 "enable_zerocopy_send_server": true, 00:18:08.100 "enable_zerocopy_send_client": false, 00:18:08.100 "zerocopy_threshold": 0, 00:18:08.100 "tls_version": 0, 00:18:08.100 "enable_ktls": false 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "sock_impl_set_options", 00:18:08.100 "params": { 00:18:08.100 "impl_name": "ssl", 00:18:08.100 
"recv_buf_size": 4096, 00:18:08.100 "send_buf_size": 4096, 00:18:08.100 "enable_recv_pipe": true, 00:18:08.100 "enable_quickack": false, 00:18:08.100 "enable_placement_id": 0, 00:18:08.100 "enable_zerocopy_send_server": true, 00:18:08.100 "enable_zerocopy_send_client": false, 00:18:08.100 "zerocopy_threshold": 0, 00:18:08.100 "tls_version": 0, 00:18:08.100 "enable_ktls": false 00:18:08.100 } 00:18:08.100 } 00:18:08.100 ] 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "subsystem": "vmd", 00:18:08.100 "config": [] 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "subsystem": "accel", 00:18:08.100 "config": [ 00:18:08.100 { 00:18:08.100 "method": "accel_set_options", 00:18:08.100 "params": { 00:18:08.100 "small_cache_size": 128, 00:18:08.100 "large_cache_size": 16, 00:18:08.100 "task_count": 2048, 00:18:08.100 "sequence_count": 2048, 00:18:08.100 "buf_count": 2048 00:18:08.100 } 00:18:08.100 } 00:18:08.100 ] 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "subsystem": "bdev", 00:18:08.100 "config": [ 00:18:08.100 { 00:18:08.100 "method": "bdev_set_options", 00:18:08.100 "params": { 00:18:08.100 "bdev_io_pool_size": 65535, 00:18:08.100 "bdev_io_cache_size": 256, 00:18:08.100 "bdev_auto_examine": true, 00:18:08.100 "iobuf_small_cache_size": 128, 00:18:08.100 "iobuf_large_cache_size": 16 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "bdev_raid_set_options", 00:18:08.100 "params": { 00:18:08.100 "process_window_size_kb": 1024 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "bdev_iscsi_set_options", 00:18:08.100 "params": { 00:18:08.100 "timeout_sec": 30 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "bdev_nvme_set_options", 00:18:08.100 "params": { 00:18:08.100 "action_on_timeout": "none", 00:18:08.100 "timeout_us": 0, 00:18:08.100 "timeout_admin_us": 0, 00:18:08.100 "keep_alive_timeout_ms": 10000, 00:18:08.100 "arbitration_burst": 0, 00:18:08.100 "low_priority_weight": 0, 00:18:08.100 "medium_priority_weight": 0, 00:18:08.100 "high_priority_weight": 0, 00:18:08.100 "nvme_adminq_poll_period_us": 10000, 00:18:08.100 "nvme_ioq_poll_period_us": 0, 00:18:08.100 "io_queue_requests": 0, 00:18:08.100 "delay_cmd_submit": true, 00:18:08.100 "transport_retry_count": 4, 00:18:08.100 "bdev_retry_count": 3, 00:18:08.100 "transport_ack_timeout": 0, 00:18:08.100 "ctrlr_loss_timeout_sec": 0, 00:18:08.100 "reconnect_delay_sec": 0, 00:18:08.100 "fast_io_fail_timeout_sec": 0, 00:18:08.100 "disable_auto_failback": false, 00:18:08.100 "generate_uuids": false, 00:18:08.100 "transport_tos": 0, 00:18:08.100 "nvme_error_stat": false, 00:18:08.100 "rdma_srq_size": 0, 00:18:08.100 "io_path_stat": false, 00:18:08.100 "allow_accel_sequence": false, 00:18:08.100 "rdma_max_cq_size": 0, 00:18:08.100 "rdma_cm_event_timeout_ms": 0, 00:18:08.100 "dhchap_digests": [ 00:18:08.100 "sha256", 00:18:08.100 "sha384", 00:18:08.100 "sha512" 00:18:08.100 ], 00:18:08.100 "dhchap_dhgroups": [ 00:18:08.100 "null", 00:18:08.100 "ffdhe2048", 00:18:08.100 "ffdhe3072", 00:18:08.100 "ffdhe4096", 00:18:08.100 "ffdhe6144", 00:18:08.100 "ffdhe8192" 00:18:08.100 ] 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "bdev_nvme_set_hotplug", 00:18:08.100 "params": { 00:18:08.100 "period_us": 100000, 00:18:08.100 "enable": false 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "bdev_malloc_create", 00:18:08.100 "params": { 00:18:08.100 "name": "malloc0", 00:18:08.100 "num_blocks": 8192, 00:18:08.100 "block_size": 4096, 00:18:08.100 "physical_block_size": 4096, 
00:18:08.100 "uuid": "e438695b-0e46-4048-905a-7629cb527dae", 00:18:08.100 "optimal_io_boundary": 0 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "bdev_wait_for_examine" 00:18:08.100 } 00:18:08.100 ] 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "subsystem": "nbd", 00:18:08.100 "config": [] 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "subsystem": "scheduler", 00:18:08.100 "config": [ 00:18:08.100 { 00:18:08.100 "method": "framework_set_scheduler", 00:18:08.100 "params": { 00:18:08.100 "name": "static" 00:18:08.100 } 00:18:08.100 } 00:18:08.100 ] 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "subsystem": "nvmf", 00:18:08.100 "config": [ 00:18:08.100 { 00:18:08.100 "method": "nvmf_set_config", 00:18:08.100 "params": { 00:18:08.100 "discovery_filter": "match_any", 00:18:08.100 "admin_cmd_passthru": { 00:18:08.100 "identify_ctrlr": false 00:18:08.100 } 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "nvmf_set_max_subsystems", 00:18:08.100 "params": { 00:18:08.100 "max_subsystems": 1024 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "nvmf_set_crdt", 00:18:08.100 "params": { 00:18:08.100 "crdt1": 0, 00:18:08.100 "crdt2": 0, 00:18:08.100 "crdt3": 0 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.100 "method": "nvmf_create_transport", 00:18:08.100 "params": { 00:18:08.100 "trtype": "TCP", 00:18:08.100 "max_queue_depth": 128, 00:18:08.100 "max_io_qpairs_per_ctrlr": 127, 00:18:08.100 "in_capsule_data_size": 4096, 00:18:08.100 "max_io_size": 131072, 00:18:08.100 "io_unit_size": 131072, 00:18:08.100 "max_aq_depth": 128, 00:18:08.100 "num_shared_buffers": 511, 00:18:08.100 "buf_cache_size": 4294967295, 00:18:08.100 "dif_insert_or_strip": false, 00:18:08.100 "zcopy": false, 00:18:08.100 "c2h_success": false, 00:18:08.100 "sock_priority": 0, 00:18:08.100 "abort_timeout_sec": 1, 00:18:08.100 "ack_timeout": 0 00:18:08.100 } 00:18:08.100 }, 00:18:08.100 { 00:18:08.101 "method": "nvmf_create_subsystem", 00:18:08.101 "params": { 00:18:08.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.101 "allow_any_host": false, 00:18:08.101 "serial_number": "SPDK00000000000001", 00:18:08.101 "model_number": "SPDK bdev Controller", 00:18:08.101 "max_namespaces": 10, 00:18:08.101 "min_cntlid": 1, 00:18:08.101 "max_cntlid": 65519, 00:18:08.101 "ana_reporting": false 00:18:08.101 } 00:18:08.101 }, 00:18:08.101 { 00:18:08.101 "method": "nvmf_subsystem_add_host", 00:18:08.101 "params": { 00:18:08.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.101 "host": "nqn.2016-06.io.spdk:host1", 00:18:08.101 "psk": "/tmp/tmp.XJGziDUVsy" 00:18:08.101 } 00:18:08.101 }, 00:18:08.101 { 00:18:08.101 "method": "nvmf_subsystem_add_ns", 00:18:08.101 "params": { 00:18:08.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.101 "namespace": { 00:18:08.101 "nsid": 1, 00:18:08.101 "bdev_name": "malloc0", 00:18:08.101 "nguid": "E438695B0E464048905A7629CB527DAE", 00:18:08.101 "uuid": "e438695b-0e46-4048-905a-7629cb527dae", 00:18:08.101 "no_auto_visible": false 00:18:08.101 } 00:18:08.101 } 00:18:08.101 }, 00:18:08.101 { 00:18:08.101 "method": "nvmf_subsystem_add_listener", 00:18:08.101 "params": { 00:18:08.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.101 "listen_address": { 00:18:08.101 "trtype": "TCP", 00:18:08.101 "adrfam": "IPv4", 00:18:08.101 "traddr": "10.0.0.2", 00:18:08.101 "trsvcid": "4420" 00:18:08.101 }, 00:18:08.101 "secure_channel": true 00:18:08.101 } 00:18:08.101 } 00:18:08.101 ] 00:18:08.101 } 00:18:08.101 ] 00:18:08.101 }' 00:18:08.101 14:37:16 -- nvmf/common.sh@470 
-- # nvmfpid=70204 00:18:08.101 14:37:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:08.101 14:37:16 -- nvmf/common.sh@471 -- # waitforlisten 70204 00:18:08.101 14:37:16 -- common/autotest_common.sh@817 -- # '[' -z 70204 ']' 00:18:08.101 14:37:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.101 14:37:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.101 14:37:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.101 14:37:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.101 14:37:16 -- common/autotest_common.sh@10 -- # set +x 00:18:08.101 [2024-04-17 14:37:16.699071] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:08.101 [2024-04-17 14:37:16.699171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.360 [2024-04-17 14:37:16.838259] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.360 [2024-04-17 14:37:16.895763] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.360 [2024-04-17 14:37:16.895825] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.360 [2024-04-17 14:37:16.895837] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.360 [2024-04-17 14:37:16.895846] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.360 [2024-04-17 14:37:16.895853] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.361 [2024-04-17 14:37:16.895944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.619 [2024-04-17 14:37:17.079316] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.619 [2024-04-17 14:37:17.095253] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:08.619 [2024-04-17 14:37:17.111247] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:08.619 [2024-04-17 14:37:17.111442] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.192 14:37:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.192 14:37:17 -- common/autotest_common.sh@850 -- # return 0 00:18:09.192 14:37:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:09.192 14:37:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:09.192 14:37:17 -- common/autotest_common.sh@10 -- # set +x 00:18:09.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
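For context on the lines that follow: target/tls.sh starts a bdevperf instance in RPC-wait mode (-z), feeds it the JSON configuration echoed below through a file descriptor, and only then kicks off the verify workload with bdevperf.py. A minimal sketch of that flow, using the same binary, socket and job parameters as this run (writing the config to a plain file, bdevperf_tls.json, is an assumption standing in for the test's process substitution on fd 63):

# Launch bdevperf idle (-z): it applies the config (including the TLS-enabled
# bdev_nvme_attach_controller entry) and then waits for RPCs on its own socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 \
    -c /dev/fd/63 63< bdevperf_tls.json &

# Once the socket is up, run the configured verify job for the requested time.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests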
00:18:09.192 14:37:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.192 14:37:17 -- target/tls.sh@207 -- # bdevperf_pid=70235 00:18:09.192 14:37:17 -- target/tls.sh@208 -- # waitforlisten 70235 /var/tmp/bdevperf.sock 00:18:09.192 14:37:17 -- common/autotest_common.sh@817 -- # '[' -z 70235 ']' 00:18:09.192 14:37:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.192 14:37:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:09.192 14:37:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.192 14:37:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:09.192 14:37:17 -- common/autotest_common.sh@10 -- # set +x 00:18:09.193 14:37:17 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:09.193 14:37:17 -- target/tls.sh@204 -- # echo '{ 00:18:09.193 "subsystems": [ 00:18:09.193 { 00:18:09.193 "subsystem": "keyring", 00:18:09.193 "config": [] 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "subsystem": "iobuf", 00:18:09.193 "config": [ 00:18:09.193 { 00:18:09.193 "method": "iobuf_set_options", 00:18:09.193 "params": { 00:18:09.193 "small_pool_count": 8192, 00:18:09.193 "large_pool_count": 1024, 00:18:09.193 "small_bufsize": 8192, 00:18:09.193 "large_bufsize": 135168 00:18:09.193 } 00:18:09.193 } 00:18:09.193 ] 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "subsystem": "sock", 00:18:09.193 "config": [ 00:18:09.193 { 00:18:09.193 "method": "sock_impl_set_options", 00:18:09.193 "params": { 00:18:09.193 "impl_name": "uring", 00:18:09.193 "recv_buf_size": 2097152, 00:18:09.193 "send_buf_size": 2097152, 00:18:09.193 "enable_recv_pipe": true, 00:18:09.193 "enable_quickack": false, 00:18:09.193 "enable_placement_id": 0, 00:18:09.193 "enable_zerocopy_send_server": false, 00:18:09.193 "enable_zerocopy_send_client": false, 00:18:09.193 "zerocopy_threshold": 0, 00:18:09.193 "tls_version": 0, 00:18:09.193 "enable_ktls": false 00:18:09.193 } 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "method": "sock_impl_set_options", 00:18:09.193 "params": { 00:18:09.193 "impl_name": "posix", 00:18:09.193 "recv_buf_size": 2097152, 00:18:09.193 "send_buf_size": 2097152, 00:18:09.193 "enable_recv_pipe": true, 00:18:09.193 "enable_quickack": false, 00:18:09.193 "enable_placement_id": 0, 00:18:09.193 "enable_zerocopy_send_server": true, 00:18:09.193 "enable_zerocopy_send_client": false, 00:18:09.193 "zerocopy_threshold": 0, 00:18:09.193 "tls_version": 0, 00:18:09.193 "enable_ktls": false 00:18:09.193 } 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "method": "sock_impl_set_options", 00:18:09.193 "params": { 00:18:09.193 "impl_name": "ssl", 00:18:09.193 "recv_buf_size": 4096, 00:18:09.193 "send_buf_size": 4096, 00:18:09.193 "enable_recv_pipe": true, 00:18:09.193 "enable_quickack": false, 00:18:09.193 "enable_placement_id": 0, 00:18:09.193 "enable_zerocopy_send_server": true, 00:18:09.193 "enable_zerocopy_send_client": false, 00:18:09.193 "zerocopy_threshold": 0, 00:18:09.193 "tls_version": 0, 00:18:09.193 "enable_ktls": false 00:18:09.193 } 00:18:09.193 } 00:18:09.193 ] 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "subsystem": "vmd", 00:18:09.193 "config": [] 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "subsystem": "accel", 00:18:09.193 "config": [ 00:18:09.193 { 00:18:09.193 "method": "accel_set_options", 
00:18:09.193 "params": { 00:18:09.193 "small_cache_size": 128, 00:18:09.193 "large_cache_size": 16, 00:18:09.193 "task_count": 2048, 00:18:09.193 "sequence_count": 2048, 00:18:09.193 "buf_count": 2048 00:18:09.193 } 00:18:09.193 } 00:18:09.193 ] 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "subsystem": "bdev", 00:18:09.193 "config": [ 00:18:09.193 { 00:18:09.193 "method": "bdev_set_options", 00:18:09.193 "params": { 00:18:09.193 "bdev_io_pool_size": 65535, 00:18:09.193 "bdev_io_cache_size": 256, 00:18:09.193 "bdev_auto_examine": true, 00:18:09.193 "iobuf_small_cache_size": 128, 00:18:09.193 "iobuf_large_cache_size": 16 00:18:09.193 } 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "method": "bdev_raid_set_options", 00:18:09.193 "params": { 00:18:09.193 "process_window_size_kb": 1024 00:18:09.193 } 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "method": "bdev_iscsi_set_options", 00:18:09.193 "params": { 00:18:09.193 "timeout_sec": 30 00:18:09.193 } 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "method": "bdev_nvme_set_options", 00:18:09.193 "params": { 00:18:09.193 "action_on_timeout": "none", 00:18:09.193 "timeout_us": 0, 00:18:09.193 "timeout_admin_us": 0, 00:18:09.193 "keep_alive_timeout_ms": 10000, 00:18:09.193 "arbitration_burst": 0, 00:18:09.193 "low_priority_weight": 0, 00:18:09.193 "medium_priority_weight": 0, 00:18:09.193 "high_priority_weight": 0, 00:18:09.193 "nvme_adminq_poll_period_us": 10000, 00:18:09.193 "nvme_ioq_poll_period_us": 0, 00:18:09.193 "io_queue_requests": 512, 00:18:09.193 "delay_cmd_submit": true, 00:18:09.193 "transport_retry_count": 4, 00:18:09.193 "bdev_retry_count": 3, 00:18:09.193 "transport_ack_timeout": 0, 00:18:09.193 "ctrlr_loss_timeout_sec": 0, 00:18:09.193 "reconnect_delay_sec": 0, 00:18:09.193 "fast_io_fail_timeout_sec": 0, 00:18:09.193 "disable_auto_failback": false, 00:18:09.193 "generate_uuids": false, 00:18:09.193 "transport_tos": 0, 00:18:09.193 "nvme_error_stat": false, 00:18:09.193 "rdma_srq_size": 0, 00:18:09.193 "io_path_stat": false, 00:18:09.193 "allow_accel_sequence": false, 00:18:09.193 "rdma_max_cq_size": 0, 00:18:09.193 "rdma_cm_event_timeout_ms": 0, 00:18:09.193 "dhchap_digests": [ 00:18:09.193 "sha256", 00:18:09.193 "sha384", 00:18:09.193 "sha512" 00:18:09.193 ], 00:18:09.193 "dhchap_dhgroups": [ 00:18:09.193 "null", 00:18:09.193 "ffdhe2048", 00:18:09.193 "ffdhe3072", 00:18:09.193 "ffdhe4096", 00:18:09.193 "ffdhe6144", 00:18:09.193 "ffdhe8192" 00:18:09.193 ] 00:18:09.193 } 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "method": "bdev_nvme_attach_controller", 00:18:09.193 "params": { 00:18:09.193 "name": "TLSTEST", 00:18:09.193 "trtype": "TCP", 00:18:09.193 "adrfam": "IPv4", 00:18:09.193 "traddr": "10.0.0.2", 00:18:09.193 "trsvcid": "4420", 00:18:09.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.193 "prchk_reftag": false, 00:18:09.193 "prchk_guard": false, 00:18:09.193 "ctrlr_loss_timeout_sec": 0, 00:18:09.193 "reconnect_delay_sec": 0, 00:18:09.193 "fast_io_fail_timeout_sec": 0, 00:18:09.193 "psk": "/tmp/tmp.XJGziDUVsy", 00:18:09.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.193 "hdgst": false, 00:18:09.193 "ddgst": false 00:18:09.193 } 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "method": "bdev_nvme_set_hotplug", 00:18:09.193 "params": { 00:18:09.193 "period_us": 100000, 00:18:09.193 "enable": false 00:18:09.193 } 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "method": "bdev_wait_for_examine" 00:18:09.193 } 00:18:09.193 ] 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "subsystem": "nbd", 00:18:09.193 "config": [] 00:18:09.193 } 
00:18:09.193 ] 00:18:09.193 }' 00:18:09.193 [2024-04-17 14:37:17.688747] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:09.193 [2024-04-17 14:37:17.688839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70235 ] 00:18:09.452 [2024-04-17 14:37:17.826586] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.452 [2024-04-17 14:37:17.895011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.452 [2024-04-17 14:37:18.027031] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.452 [2024-04-17 14:37:18.027422] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:10.386 14:37:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:10.386 14:37:18 -- common/autotest_common.sh@850 -- # return 0 00:18:10.386 14:37:18 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:10.386 Running I/O for 10 seconds... 00:18:20.361 00:18:20.361 Latency(us) 00:18:20.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.361 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:20.361 Verification LBA range: start 0x0 length 0x2000 00:18:20.361 TLSTESTn1 : 10.02 3838.68 14.99 0.00 0.00 33281.40 6464.23 29193.31 00:18:20.361 =================================================================================================================== 00:18:20.361 Total : 3838.68 14.99 0.00 0.00 33281.40 6464.23 29193.31 00:18:20.361 0 00:18:20.361 14:37:28 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:20.361 14:37:28 -- target/tls.sh@214 -- # killprocess 70235 00:18:20.361 14:37:28 -- common/autotest_common.sh@936 -- # '[' -z 70235 ']' 00:18:20.361 14:37:28 -- common/autotest_common.sh@940 -- # kill -0 70235 00:18:20.361 14:37:28 -- common/autotest_common.sh@941 -- # uname 00:18:20.361 14:37:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:20.361 14:37:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70235 00:18:20.361 killing process with pid 70235 00:18:20.361 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.361 00:18:20.361 Latency(us) 00:18:20.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.361 =================================================================================================================== 00:18:20.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.361 14:37:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:20.361 14:37:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:20.361 14:37:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70235' 00:18:20.361 14:37:28 -- common/autotest_common.sh@955 -- # kill 70235 00:18:20.361 [2024-04-17 14:37:28.846431] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:20.361 14:37:28 -- common/autotest_common.sh@960 -- # wait 70235 00:18:20.619 14:37:29 -- target/tls.sh@215 -- # killprocess 70204 00:18:20.619 14:37:29 
-- common/autotest_common.sh@936 -- # '[' -z 70204 ']' 00:18:20.619 14:37:29 -- common/autotest_common.sh@940 -- # kill -0 70204 00:18:20.619 14:37:29 -- common/autotest_common.sh@941 -- # uname 00:18:20.619 14:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:20.619 14:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70204 00:18:20.619 killing process with pid 70204 00:18:20.619 14:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:20.619 14:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:20.619 14:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70204' 00:18:20.619 14:37:29 -- common/autotest_common.sh@955 -- # kill 70204 00:18:20.619 [2024-04-17 14:37:29.062728] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:20.619 14:37:29 -- common/autotest_common.sh@960 -- # wait 70204 00:18:20.877 14:37:29 -- target/tls.sh@218 -- # nvmfappstart 00:18:20.877 14:37:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:20.877 14:37:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:20.877 14:37:29 -- common/autotest_common.sh@10 -- # set +x 00:18:20.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.877 14:37:29 -- nvmf/common.sh@470 -- # nvmfpid=70371 00:18:20.877 14:37:29 -- nvmf/common.sh@471 -- # waitforlisten 70371 00:18:20.877 14:37:29 -- common/autotest_common.sh@817 -- # '[' -z 70371 ']' 00:18:20.877 14:37:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:20.877 14:37:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.877 14:37:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.877 14:37:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.877 14:37:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.877 14:37:29 -- common/autotest_common.sh@10 -- # set +x 00:18:20.877 [2024-04-17 14:37:29.316037] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:20.877 [2024-04-17 14:37:29.316158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.877 [2024-04-17 14:37:29.447707] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.141 [2024-04-17 14:37:29.506174] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.141 [2024-04-17 14:37:29.506239] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.141 [2024-04-17 14:37:29.506251] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.141 [2024-04-17 14:37:29.506260] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.141 [2024-04-17 14:37:29.506267] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
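The setup_nvmf_tgt call that runs next configures the TLS side of the target. Condensed into the underlying rpc.py invocations (the NQNs, address, port and PSK path are exactly the ones used by this run; the two shell variables are only shorthand for this sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
psk=/tmp/tmp.XJGziDUVsy                                   # PSK file created earlier by the test

$rpc nvmf_create_transport -t tcp -o                      # TCP transport; resulting config shows c2h_success=false
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -s SPDK00000000000001 -m 10                          # subsystem with serial number, up to 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420 -k                        # -k: listener requires a secure (TLS) channel
$rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MiB malloc bdev used as the namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
     nqn.2016-06.io.spdk:host1 --psk "$psk"               # allow host1 and bind it to this PSK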
00:18:21.141 [2024-04-17 14:37:29.506301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.094 14:37:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.094 14:37:30 -- common/autotest_common.sh@850 -- # return 0 00:18:22.094 14:37:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:22.094 14:37:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:22.094 14:37:30 -- common/autotest_common.sh@10 -- # set +x 00:18:22.094 14:37:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.094 14:37:30 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.XJGziDUVsy 00:18:22.094 14:37:30 -- target/tls.sh@49 -- # local key=/tmp/tmp.XJGziDUVsy 00:18:22.094 14:37:30 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.094 [2024-04-17 14:37:30.642407] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.094 14:37:30 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.662 14:37:30 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.662 [2024-04-17 14:37:31.214522] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.662 [2024-04-17 14:37:31.214749] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.662 14:37:31 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.228 malloc0 00:18:23.228 14:37:31 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:23.487 14:37:31 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XJGziDUVsy 00:18:23.746 [2024-04-17 14:37:32.125437] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:23.746 14:37:32 -- target/tls.sh@222 -- # bdevperf_pid=70426 00:18:23.746 14:37:32 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:23.746 14:37:32 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.746 14:37:32 -- target/tls.sh@225 -- # waitforlisten 70426 /var/tmp/bdevperf.sock 00:18:23.746 14:37:32 -- common/autotest_common.sh@817 -- # '[' -z 70426 ']' 00:18:23.746 14:37:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.746 14:37:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.746 14:37:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.746 14:37:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.746 14:37:32 -- common/autotest_common.sh@10 -- # set +x 00:18:23.746 [2024-04-17 14:37:32.198001] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:18:23.746 [2024-04-17 14:37:32.198105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70426 ] 00:18:24.004 [2024-04-17 14:37:32.356211] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.004 [2024-04-17 14:37:32.420938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.004 14:37:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.004 14:37:32 -- common/autotest_common.sh@850 -- # return 0 00:18:24.004 14:37:32 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XJGziDUVsy 00:18:24.265 14:37:32 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:24.524 [2024-04-17 14:37:33.040366] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.524 nvme0n1 00:18:24.782 14:37:33 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:24.782 Running I/O for 1 seconds... 00:18:25.717 00:18:25.718 Latency(us) 00:18:25.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.718 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:25.718 Verification LBA range: start 0x0 length 0x2000 00:18:25.718 nvme0n1 : 1.02 3777.49 14.76 0.00 0.00 33497.37 7536.64 32887.16 00:18:25.718 =================================================================================================================== 00:18:25.718 Total : 3777.49 14.76 0.00 0.00 33497.37 7536.64 32887.16 00:18:25.718 0 00:18:25.718 14:37:34 -- target/tls.sh@234 -- # killprocess 70426 00:18:25.718 14:37:34 -- common/autotest_common.sh@936 -- # '[' -z 70426 ']' 00:18:25.718 14:37:34 -- common/autotest_common.sh@940 -- # kill -0 70426 00:18:25.718 14:37:34 -- common/autotest_common.sh@941 -- # uname 00:18:25.718 14:37:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:25.718 14:37:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70426 00:18:25.718 killing process with pid 70426 00:18:25.718 Received shutdown signal, test time was about 1.000000 seconds 00:18:25.718 00:18:25.718 Latency(us) 00:18:25.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.718 =================================================================================================================== 00:18:25.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.718 14:37:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:25.718 14:37:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:25.718 14:37:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70426' 00:18:25.718 14:37:34 -- common/autotest_common.sh@955 -- # kill 70426 00:18:25.718 14:37:34 -- common/autotest_common.sh@960 -- # wait 70426 00:18:25.976 14:37:34 -- target/tls.sh@235 -- # killprocess 70371 00:18:25.976 14:37:34 -- common/autotest_common.sh@936 -- # '[' -z 70371 ']' 00:18:25.976 14:37:34 -- common/autotest_common.sh@940 -- # kill -0 70371 00:18:25.976 14:37:34 -- common/autotest_common.sh@941 -- # 
uname 00:18:25.976 14:37:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:25.976 14:37:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70371 00:18:25.976 killing process with pid 70371 00:18:25.976 14:37:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:25.976 14:37:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:25.976 14:37:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70371' 00:18:25.976 14:37:34 -- common/autotest_common.sh@955 -- # kill 70371 00:18:25.976 [2024-04-17 14:37:34.538026] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:25.976 14:37:34 -- common/autotest_common.sh@960 -- # wait 70371 00:18:26.235 14:37:34 -- target/tls.sh@238 -- # nvmfappstart 00:18:26.235 14:37:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:26.235 14:37:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:26.235 14:37:34 -- common/autotest_common.sh@10 -- # set +x 00:18:26.235 14:37:34 -- nvmf/common.sh@470 -- # nvmfpid=70470 00:18:26.235 14:37:34 -- nvmf/common.sh@471 -- # waitforlisten 70470 00:18:26.235 14:37:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:26.235 14:37:34 -- common/autotest_common.sh@817 -- # '[' -z 70470 ']' 00:18:26.235 14:37:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.235 14:37:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:26.235 14:37:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.235 14:37:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:26.235 14:37:34 -- common/autotest_common.sh@10 -- # set +x 00:18:26.235 [2024-04-17 14:37:34.809389] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:26.235 [2024-04-17 14:37:34.809504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.508 [2024-04-17 14:37:34.950739] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.508 [2024-04-17 14:37:35.022489] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.508 [2024-04-17 14:37:35.022555] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.508 [2024-04-17 14:37:35.022571] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.508 [2024-04-17 14:37:35.022581] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.508 [2024-04-17 14:37:35.022589] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
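The final stage exercises the keyring-based PSK interface on the initiator: instead of passing a PSK file path directly to bdev_nvme_attach_controller (the deprecated option warned about above), the key file is first registered as a named key on the bdevperf instance and then referenced by name. Condensed from the rpc.py calls that follow (names, address and key path as used in this run; the bperf_rpc helper is only a convenience for this sketch):

bperf_rpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"
}

# Register the PSK file under the name "key0" in the application keyring.
bperf_rpc keyring_file_add_key key0 /tmp/tmp.XJGziDUVsy

# Attach the NVMe-oF/TCP controller over the TLS listener, referencing the
# keyring entry by name rather than by file path.
bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1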
00:18:26.508 [2024-04-17 14:37:35.022619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.466 14:37:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:27.466 14:37:35 -- common/autotest_common.sh@850 -- # return 0 00:18:27.466 14:37:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:27.466 14:37:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:27.466 14:37:35 -- common/autotest_common.sh@10 -- # set +x 00:18:27.466 14:37:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.466 14:37:35 -- target/tls.sh@239 -- # rpc_cmd 00:18:27.466 14:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.466 14:37:35 -- common/autotest_common.sh@10 -- # set +x 00:18:27.466 [2024-04-17 14:37:35.868048] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.466 malloc0 00:18:27.466 [2024-04-17 14:37:35.895778] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.466 [2024-04-17 14:37:35.896168] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.466 14:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.466 14:37:35 -- target/tls.sh@252 -- # bdevperf_pid=70508 00:18:27.466 14:37:35 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:27.466 14:37:35 -- target/tls.sh@254 -- # waitforlisten 70508 /var/tmp/bdevperf.sock 00:18:27.466 14:37:35 -- common/autotest_common.sh@817 -- # '[' -z 70508 ']' 00:18:27.466 14:37:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.466 14:37:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:27.466 14:37:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.466 14:37:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:27.466 14:37:35 -- common/autotest_common.sh@10 -- # set +x 00:18:27.466 [2024-04-17 14:37:35.997233] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:18:27.466 [2024-04-17 14:37:35.997742] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70508 ] 00:18:27.724 [2024-04-17 14:37:36.144262] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.724 [2024-04-17 14:37:36.213069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.663 14:37:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:28.663 14:37:37 -- common/autotest_common.sh@850 -- # return 0 00:18:28.663 14:37:37 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XJGziDUVsy 00:18:28.921 14:37:37 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:29.179 [2024-04-17 14:37:37.698566] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.179 nvme0n1 00:18:29.436 14:37:37 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.436 Running I/O for 1 seconds... 00:18:30.370 00:18:30.370 Latency(us) 00:18:30.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.370 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.370 Verification LBA range: start 0x0 length 0x2000 00:18:30.370 nvme0n1 : 1.01 2968.04 11.59 0.00 0.00 42769.78 5570.56 34793.66 00:18:30.370 =================================================================================================================== 00:18:30.370 Total : 2968.04 11.59 0.00 0.00 42769.78 5570.56 34793.66 00:18:30.370 0 00:18:30.370 14:37:38 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:30.370 14:37:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.370 14:37:38 -- common/autotest_common.sh@10 -- # set +x 00:18:30.628 14:37:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.628 14:37:39 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:30.628 "subsystems": [ 00:18:30.628 { 00:18:30.628 "subsystem": "keyring", 00:18:30.628 "config": [ 00:18:30.628 { 00:18:30.628 "method": "keyring_file_add_key", 00:18:30.628 "params": { 00:18:30.628 "name": "key0", 00:18:30.628 "path": "/tmp/tmp.XJGziDUVsy" 00:18:30.628 } 00:18:30.628 } 00:18:30.628 ] 00:18:30.628 }, 00:18:30.629 { 00:18:30.629 "subsystem": "iobuf", 00:18:30.629 "config": [ 00:18:30.629 { 00:18:30.629 "method": "iobuf_set_options", 00:18:30.629 "params": { 00:18:30.629 "small_pool_count": 8192, 00:18:30.629 "large_pool_count": 1024, 00:18:30.629 "small_bufsize": 8192, 00:18:30.629 "large_bufsize": 135168 00:18:30.629 } 00:18:30.629 } 00:18:30.629 ] 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "subsystem": "sock", 00:18:30.629 "config": [ 00:18:30.629 { 00:18:30.629 "method": "sock_impl_set_options", 00:18:30.629 "params": { 00:18:30.629 "impl_name": "uring", 00:18:30.629 "recv_buf_size": 2097152, 00:18:30.629 "send_buf_size": 2097152, 00:18:30.629 "enable_recv_pipe": true, 00:18:30.629 "enable_quickack": false, 00:18:30.629 "enable_placement_id": 0, 00:18:30.629 "enable_zerocopy_send_server": false, 00:18:30.629 "enable_zerocopy_send_client": false, 00:18:30.629 "zerocopy_threshold": 0, 
00:18:30.629 "tls_version": 0, 00:18:30.629 "enable_ktls": false 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "sock_impl_set_options", 00:18:30.629 "params": { 00:18:30.629 "impl_name": "posix", 00:18:30.629 "recv_buf_size": 2097152, 00:18:30.629 "send_buf_size": 2097152, 00:18:30.629 "enable_recv_pipe": true, 00:18:30.629 "enable_quickack": false, 00:18:30.629 "enable_placement_id": 0, 00:18:30.629 "enable_zerocopy_send_server": true, 00:18:30.629 "enable_zerocopy_send_client": false, 00:18:30.629 "zerocopy_threshold": 0, 00:18:30.629 "tls_version": 0, 00:18:30.629 "enable_ktls": false 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "sock_impl_set_options", 00:18:30.629 "params": { 00:18:30.629 "impl_name": "ssl", 00:18:30.629 "recv_buf_size": 4096, 00:18:30.629 "send_buf_size": 4096, 00:18:30.629 "enable_recv_pipe": true, 00:18:30.629 "enable_quickack": false, 00:18:30.629 "enable_placement_id": 0, 00:18:30.629 "enable_zerocopy_send_server": true, 00:18:30.629 "enable_zerocopy_send_client": false, 00:18:30.629 "zerocopy_threshold": 0, 00:18:30.629 "tls_version": 0, 00:18:30.629 "enable_ktls": false 00:18:30.629 } 00:18:30.629 } 00:18:30.629 ] 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "subsystem": "vmd", 00:18:30.629 "config": [] 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "subsystem": "accel", 00:18:30.629 "config": [ 00:18:30.629 { 00:18:30.629 "method": "accel_set_options", 00:18:30.629 "params": { 00:18:30.629 "small_cache_size": 128, 00:18:30.629 "large_cache_size": 16, 00:18:30.629 "task_count": 2048, 00:18:30.629 "sequence_count": 2048, 00:18:30.629 "buf_count": 2048 00:18:30.629 } 00:18:30.629 } 00:18:30.629 ] 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "subsystem": "bdev", 00:18:30.629 "config": [ 00:18:30.629 { 00:18:30.629 "method": "bdev_set_options", 00:18:30.629 "params": { 00:18:30.629 "bdev_io_pool_size": 65535, 00:18:30.629 "bdev_io_cache_size": 256, 00:18:30.629 "bdev_auto_examine": true, 00:18:30.629 "iobuf_small_cache_size": 128, 00:18:30.629 "iobuf_large_cache_size": 16 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "bdev_raid_set_options", 00:18:30.629 "params": { 00:18:30.629 "process_window_size_kb": 1024 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "bdev_iscsi_set_options", 00:18:30.629 "params": { 00:18:30.629 "timeout_sec": 30 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "bdev_nvme_set_options", 00:18:30.629 "params": { 00:18:30.629 "action_on_timeout": "none", 00:18:30.629 "timeout_us": 0, 00:18:30.629 "timeout_admin_us": 0, 00:18:30.629 "keep_alive_timeout_ms": 10000, 00:18:30.629 "arbitration_burst": 0, 00:18:30.629 "low_priority_weight": 0, 00:18:30.629 "medium_priority_weight": 0, 00:18:30.629 "high_priority_weight": 0, 00:18:30.629 "nvme_adminq_poll_period_us": 10000, 00:18:30.629 "nvme_ioq_poll_period_us": 0, 00:18:30.629 "io_queue_requests": 0, 00:18:30.629 "delay_cmd_submit": true, 00:18:30.629 "transport_retry_count": 4, 00:18:30.629 "bdev_retry_count": 3, 00:18:30.629 "transport_ack_timeout": 0, 00:18:30.629 "ctrlr_loss_timeout_sec": 0, 00:18:30.629 "reconnect_delay_sec": 0, 00:18:30.629 "fast_io_fail_timeout_sec": 0, 00:18:30.629 "disable_auto_failback": false, 00:18:30.629 "generate_uuids": false, 00:18:30.629 "transport_tos": 0, 00:18:30.629 "nvme_error_stat": false, 00:18:30.629 "rdma_srq_size": 0, 00:18:30.629 "io_path_stat": false, 00:18:30.629 "allow_accel_sequence": false, 00:18:30.629 "rdma_max_cq_size": 0, 00:18:30.629 
"rdma_cm_event_timeout_ms": 0, 00:18:30.629 "dhchap_digests": [ 00:18:30.629 "sha256", 00:18:30.629 "sha384", 00:18:30.629 "sha512" 00:18:30.629 ], 00:18:30.629 "dhchap_dhgroups": [ 00:18:30.629 "null", 00:18:30.629 "ffdhe2048", 00:18:30.629 "ffdhe3072", 00:18:30.629 "ffdhe4096", 00:18:30.629 "ffdhe6144", 00:18:30.629 "ffdhe8192" 00:18:30.629 ] 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "bdev_nvme_set_hotplug", 00:18:30.629 "params": { 00:18:30.629 "period_us": 100000, 00:18:30.629 "enable": false 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "bdev_malloc_create", 00:18:30.629 "params": { 00:18:30.629 "name": "malloc0", 00:18:30.629 "num_blocks": 8192, 00:18:30.629 "block_size": 4096, 00:18:30.629 "physical_block_size": 4096, 00:18:30.629 "uuid": "d011f598-74a8-41f0-8a43-534ac2399bf4", 00:18:30.629 "optimal_io_boundary": 0 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "bdev_wait_for_examine" 00:18:30.629 } 00:18:30.629 ] 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "subsystem": "nbd", 00:18:30.629 "config": [] 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "subsystem": "scheduler", 00:18:30.629 "config": [ 00:18:30.629 { 00:18:30.629 "method": "framework_set_scheduler", 00:18:30.629 "params": { 00:18:30.629 "name": "static" 00:18:30.629 } 00:18:30.629 } 00:18:30.629 ] 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "subsystem": "nvmf", 00:18:30.629 "config": [ 00:18:30.629 { 00:18:30.629 "method": "nvmf_set_config", 00:18:30.629 "params": { 00:18:30.629 "discovery_filter": "match_any", 00:18:30.629 "admin_cmd_passthru": { 00:18:30.629 "identify_ctrlr": false 00:18:30.629 } 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "nvmf_set_max_subsystems", 00:18:30.629 "params": { 00:18:30.629 "max_subsystems": 1024 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "nvmf_set_crdt", 00:18:30.629 "params": { 00:18:30.629 "crdt1": 0, 00:18:30.629 "crdt2": 0, 00:18:30.629 "crdt3": 0 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "nvmf_create_transport", 00:18:30.629 "params": { 00:18:30.629 "trtype": "TCP", 00:18:30.629 "max_queue_depth": 128, 00:18:30.629 "max_io_qpairs_per_ctrlr": 127, 00:18:30.629 "in_capsule_data_size": 4096, 00:18:30.629 "max_io_size": 131072, 00:18:30.629 "io_unit_size": 131072, 00:18:30.629 "max_aq_depth": 128, 00:18:30.629 "num_shared_buffers": 511, 00:18:30.629 "buf_cache_size": 4294967295, 00:18:30.629 "dif_insert_or_strip": false, 00:18:30.629 "zcopy": false, 00:18:30.629 "c2h_success": false, 00:18:30.629 "sock_priority": 0, 00:18:30.629 "abort_timeout_sec": 1, 00:18:30.629 "ack_timeout": 0 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "nvmf_create_subsystem", 00:18:30.629 "params": { 00:18:30.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.629 "allow_any_host": false, 00:18:30.629 "serial_number": "00000000000000000000", 00:18:30.629 "model_number": "SPDK bdev Controller", 00:18:30.629 "max_namespaces": 32, 00:18:30.629 "min_cntlid": 1, 00:18:30.629 "max_cntlid": 65519, 00:18:30.629 "ana_reporting": false 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "nvmf_subsystem_add_host", 00:18:30.629 "params": { 00:18:30.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.629 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.629 "psk": "key0" 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "nvmf_subsystem_add_ns", 00:18:30.629 "params": { 00:18:30.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:30.629 "namespace": { 00:18:30.629 "nsid": 1, 00:18:30.629 "bdev_name": "malloc0", 00:18:30.629 "nguid": "D011F59874A841F08A43534AC2399BF4", 00:18:30.629 "uuid": "d011f598-74a8-41f0-8a43-534ac2399bf4", 00:18:30.629 "no_auto_visible": false 00:18:30.629 } 00:18:30.629 } 00:18:30.629 }, 00:18:30.629 { 00:18:30.629 "method": "nvmf_subsystem_add_listener", 00:18:30.629 "params": { 00:18:30.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.629 "listen_address": { 00:18:30.629 "trtype": "TCP", 00:18:30.629 "adrfam": "IPv4", 00:18:30.629 "traddr": "10.0.0.2", 00:18:30.629 "trsvcid": "4420" 00:18:30.629 }, 00:18:30.629 "secure_channel": true 00:18:30.629 } 00:18:30.629 } 00:18:30.629 ] 00:18:30.629 } 00:18:30.629 ] 00:18:30.629 }' 00:18:30.630 14:37:39 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:30.889 14:37:39 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:30.889 "subsystems": [ 00:18:30.889 { 00:18:30.889 "subsystem": "keyring", 00:18:30.889 "config": [ 00:18:30.889 { 00:18:30.889 "method": "keyring_file_add_key", 00:18:30.889 "params": { 00:18:30.889 "name": "key0", 00:18:30.889 "path": "/tmp/tmp.XJGziDUVsy" 00:18:30.889 } 00:18:30.889 } 00:18:30.889 ] 00:18:30.889 }, 00:18:30.889 { 00:18:30.889 "subsystem": "iobuf", 00:18:30.889 "config": [ 00:18:30.889 { 00:18:30.889 "method": "iobuf_set_options", 00:18:30.889 "params": { 00:18:30.889 "small_pool_count": 8192, 00:18:30.889 "large_pool_count": 1024, 00:18:30.889 "small_bufsize": 8192, 00:18:30.889 "large_bufsize": 135168 00:18:30.889 } 00:18:30.889 } 00:18:30.889 ] 00:18:30.889 }, 00:18:30.889 { 00:18:30.889 "subsystem": "sock", 00:18:30.889 "config": [ 00:18:30.890 { 00:18:30.890 "method": "sock_impl_set_options", 00:18:30.890 "params": { 00:18:30.890 "impl_name": "uring", 00:18:30.890 "recv_buf_size": 2097152, 00:18:30.890 "send_buf_size": 2097152, 00:18:30.890 "enable_recv_pipe": true, 00:18:30.890 "enable_quickack": false, 00:18:30.890 "enable_placement_id": 0, 00:18:30.890 "enable_zerocopy_send_server": false, 00:18:30.890 "enable_zerocopy_send_client": false, 00:18:30.890 "zerocopy_threshold": 0, 00:18:30.890 "tls_version": 0, 00:18:30.890 "enable_ktls": false 00:18:30.890 } 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "method": "sock_impl_set_options", 00:18:30.890 "params": { 00:18:30.890 "impl_name": "posix", 00:18:30.890 "recv_buf_size": 2097152, 00:18:30.890 "send_buf_size": 2097152, 00:18:30.890 "enable_recv_pipe": true, 00:18:30.890 "enable_quickack": false, 00:18:30.890 "enable_placement_id": 0, 00:18:30.890 "enable_zerocopy_send_server": true, 00:18:30.890 "enable_zerocopy_send_client": false, 00:18:30.890 "zerocopy_threshold": 0, 00:18:30.890 "tls_version": 0, 00:18:30.890 "enable_ktls": false 00:18:30.890 } 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "method": "sock_impl_set_options", 00:18:30.890 "params": { 00:18:30.890 "impl_name": "ssl", 00:18:30.890 "recv_buf_size": 4096, 00:18:30.890 "send_buf_size": 4096, 00:18:30.890 "enable_recv_pipe": true, 00:18:30.890 "enable_quickack": false, 00:18:30.890 "enable_placement_id": 0, 00:18:30.890 "enable_zerocopy_send_server": true, 00:18:30.890 "enable_zerocopy_send_client": false, 00:18:30.890 "zerocopy_threshold": 0, 00:18:30.890 "tls_version": 0, 00:18:30.890 "enable_ktls": false 00:18:30.890 } 00:18:30.890 } 00:18:30.890 ] 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "subsystem": "vmd", 00:18:30.890 "config": [] 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "subsystem": "accel", 00:18:30.890 "config": [ 
00:18:30.890 { 00:18:30.890 "method": "accel_set_options", 00:18:30.890 "params": { 00:18:30.890 "small_cache_size": 128, 00:18:30.890 "large_cache_size": 16, 00:18:30.890 "task_count": 2048, 00:18:30.890 "sequence_count": 2048, 00:18:30.890 "buf_count": 2048 00:18:30.890 } 00:18:30.890 } 00:18:30.890 ] 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "subsystem": "bdev", 00:18:30.890 "config": [ 00:18:30.890 { 00:18:30.890 "method": "bdev_set_options", 00:18:30.890 "params": { 00:18:30.890 "bdev_io_pool_size": 65535, 00:18:30.890 "bdev_io_cache_size": 256, 00:18:30.890 "bdev_auto_examine": true, 00:18:30.890 "iobuf_small_cache_size": 128, 00:18:30.890 "iobuf_large_cache_size": 16 00:18:30.890 } 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "method": "bdev_raid_set_options", 00:18:30.890 "params": { 00:18:30.890 "process_window_size_kb": 1024 00:18:30.890 } 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "method": "bdev_iscsi_set_options", 00:18:30.890 "params": { 00:18:30.890 "timeout_sec": 30 00:18:30.890 } 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "method": "bdev_nvme_set_options", 00:18:30.890 "params": { 00:18:30.890 "action_on_timeout": "none", 00:18:30.890 "timeout_us": 0, 00:18:30.890 "timeout_admin_us": 0, 00:18:30.890 "keep_alive_timeout_ms": 10000, 00:18:30.890 "arbitration_burst": 0, 00:18:30.890 "low_priority_weight": 0, 00:18:30.890 "medium_priority_weight": 0, 00:18:30.890 "high_priority_weight": 0, 00:18:30.890 "nvme_adminq_poll_period_us": 10000, 00:18:30.890 "nvme_ioq_poll_period_us": 0, 00:18:30.890 "io_queue_requests": 512, 00:18:30.890 "delay_cmd_submit": true, 00:18:30.890 "transport_retry_count": 4, 00:18:30.890 "bdev_retry_count": 3, 00:18:30.890 "transport_ack_timeout": 0, 00:18:30.890 "ctrlr_loss_timeout_sec": 0, 00:18:30.890 "reconnect_delay_sec": 0, 00:18:30.890 "fast_io_fail_timeout_sec": 0, 00:18:30.890 "disable_auto_failback": false, 00:18:30.890 "generate_uuids": false, 00:18:30.890 "transport_tos": 0, 00:18:30.890 "nvme_error_stat": false, 00:18:30.890 "rdma_srq_size": 0, 00:18:30.890 "io_path_stat": false, 00:18:30.890 "allow_accel_sequence": false, 00:18:30.890 "rdma_max_cq_size": 0, 00:18:30.890 "rdma_cm_event_timeout_ms": 0, 00:18:30.890 "dhchap_digests": [ 00:18:30.890 "sha256", 00:18:30.890 "sha384", 00:18:30.890 "sha512" 00:18:30.890 ], 00:18:30.890 "dhchap_dhgroups": [ 00:18:30.890 "null", 00:18:30.890 "ffdhe2048", 00:18:30.890 "ffdhe3072", 00:18:30.890 "ffdhe4096", 00:18:30.890 "ffdhe6144", 00:18:30.890 "ffdhe8192" 00:18:30.890 ] 00:18:30.890 } 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "method": "bdev_nvme_attach_controller", 00:18:30.890 "params": { 00:18:30.890 "name": "nvme0", 00:18:30.890 "trtype": "TCP", 00:18:30.890 "adrfam": "IPv4", 00:18:30.890 "traddr": "10.0.0.2", 00:18:30.890 "trsvcid": "4420", 00:18:30.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.890 "prchk_reftag": false, 00:18:30.890 "prchk_guard": false, 00:18:30.890 "ctrlr_loss_timeout_sec": 0, 00:18:30.890 "reconnect_delay_sec": 0, 00:18:30.890 "fast_io_fail_timeout_sec": 0, 00:18:30.890 "psk": "key0", 00:18:30.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.890 "hdgst": false, 00:18:30.890 "ddgst": false 00:18:30.890 } 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "method": "bdev_nvme_set_hotplug", 00:18:30.890 "params": { 00:18:30.890 "period_us": 100000, 00:18:30.890 "enable": false 00:18:30.890 } 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "method": "bdev_enable_histogram", 00:18:30.890 "params": { 00:18:30.890 "name": "nvme0n1", 00:18:30.890 "enable": true 00:18:30.890 } 
00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "method": "bdev_wait_for_examine" 00:18:30.890 } 00:18:30.890 ] 00:18:30.890 }, 00:18:30.890 { 00:18:30.890 "subsystem": "nbd", 00:18:30.890 "config": [] 00:18:30.890 } 00:18:30.890 ] 00:18:30.890 }' 00:18:30.890 14:37:39 -- target/tls.sh@266 -- # killprocess 70508 00:18:30.890 14:37:39 -- common/autotest_common.sh@936 -- # '[' -z 70508 ']' 00:18:30.890 14:37:39 -- common/autotest_common.sh@940 -- # kill -0 70508 00:18:30.890 14:37:39 -- common/autotest_common.sh@941 -- # uname 00:18:30.890 14:37:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:30.890 14:37:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70508 00:18:30.890 killing process with pid 70508 00:18:30.890 Received shutdown signal, test time was about 1.000000 seconds 00:18:30.890 00:18:30.890 Latency(us) 00:18:30.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.890 =================================================================================================================== 00:18:30.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.890 14:37:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:30.890 14:37:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:30.890 14:37:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70508' 00:18:30.890 14:37:39 -- common/autotest_common.sh@955 -- # kill 70508 00:18:30.890 14:37:39 -- common/autotest_common.sh@960 -- # wait 70508 00:18:31.150 14:37:39 -- target/tls.sh@267 -- # killprocess 70470 00:18:31.150 14:37:39 -- common/autotest_common.sh@936 -- # '[' -z 70470 ']' 00:18:31.150 14:37:39 -- common/autotest_common.sh@940 -- # kill -0 70470 00:18:31.150 14:37:39 -- common/autotest_common.sh@941 -- # uname 00:18:31.150 14:37:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.150 14:37:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70470 00:18:31.150 killing process with pid 70470 00:18:31.150 14:37:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.150 14:37:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.150 14:37:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70470' 00:18:31.150 14:37:39 -- common/autotest_common.sh@955 -- # kill 70470 00:18:31.150 14:37:39 -- common/autotest_common.sh@960 -- # wait 70470 00:18:31.408 14:37:39 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:31.408 14:37:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:31.408 14:37:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:31.409 14:37:39 -- common/autotest_common.sh@10 -- # set +x 00:18:31.409 14:37:39 -- target/tls.sh@269 -- # echo '{ 00:18:31.409 "subsystems": [ 00:18:31.409 { 00:18:31.409 "subsystem": "keyring", 00:18:31.409 "config": [ 00:18:31.409 { 00:18:31.409 "method": "keyring_file_add_key", 00:18:31.409 "params": { 00:18:31.409 "name": "key0", 00:18:31.409 "path": "/tmp/tmp.XJGziDUVsy" 00:18:31.409 } 00:18:31.409 } 00:18:31.409 ] 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "subsystem": "iobuf", 00:18:31.409 "config": [ 00:18:31.409 { 00:18:31.409 "method": "iobuf_set_options", 00:18:31.409 "params": { 00:18:31.409 "small_pool_count": 8192, 00:18:31.409 "large_pool_count": 1024, 00:18:31.409 "small_bufsize": 8192, 00:18:31.409 "large_bufsize": 135168 00:18:31.409 } 00:18:31.409 } 00:18:31.409 ] 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "subsystem": "sock", 00:18:31.409 
"config": [ 00:18:31.409 { 00:18:31.409 "method": "sock_impl_set_options", 00:18:31.409 "params": { 00:18:31.409 "impl_name": "uring", 00:18:31.409 "recv_buf_size": 2097152, 00:18:31.409 "send_buf_size": 2097152, 00:18:31.409 "enable_recv_pipe": true, 00:18:31.409 "enable_quickack": false, 00:18:31.409 "enable_placement_id": 0, 00:18:31.409 "enable_zerocopy_send_server": false, 00:18:31.409 "enable_zerocopy_send_client": false, 00:18:31.409 "zerocopy_threshold": 0, 00:18:31.409 "tls_version": 0, 00:18:31.409 "enable_ktls": false 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "sock_impl_set_options", 00:18:31.409 "params": { 00:18:31.409 "impl_name": "posix", 00:18:31.409 "recv_buf_size": 2097152, 00:18:31.409 "send_buf_size": 2097152, 00:18:31.409 "enable_recv_pipe": true, 00:18:31.409 "enable_quickack": false, 00:18:31.409 "enable_placement_id": 0, 00:18:31.409 "enable_zerocopy_send_server": true, 00:18:31.409 "enable_zerocopy_send_client": false, 00:18:31.409 "zerocopy_threshold": 0, 00:18:31.409 "tls_version": 0, 00:18:31.409 "enable_ktls": false 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "sock_impl_set_options", 00:18:31.409 "params": { 00:18:31.409 "impl_name": "ssl", 00:18:31.409 "recv_buf_size": 4096, 00:18:31.409 "send_buf_size": 4096, 00:18:31.409 "enable_recv_pipe": true, 00:18:31.409 "enable_quickack": false, 00:18:31.409 "enable_placement_id": 0, 00:18:31.409 "enable_zerocopy_send_server": true, 00:18:31.409 "enable_zerocopy_send_client": false, 00:18:31.409 "zerocopy_threshold": 0, 00:18:31.409 "tls_version": 0, 00:18:31.409 "enable_ktls": false 00:18:31.409 } 00:18:31.409 } 00:18:31.409 ] 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "subsystem": "vmd", 00:18:31.409 "config": [] 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "subsystem": "accel", 00:18:31.409 "config": [ 00:18:31.409 { 00:18:31.409 "method": "accel_set_options", 00:18:31.409 "params": { 00:18:31.409 "small_cache_size": 128, 00:18:31.409 "large_cache_size": 16, 00:18:31.409 "task_count": 2048, 00:18:31.409 "sequence_count": 2048, 00:18:31.409 "buf_count": 2048 00:18:31.409 } 00:18:31.409 } 00:18:31.409 ] 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "subsystem": "bdev", 00:18:31.409 "config": [ 00:18:31.409 { 00:18:31.409 "method": "bdev_set_options", 00:18:31.409 "params": { 00:18:31.409 "bdev_io_pool_size": 65535, 00:18:31.409 "bdev_io_cache_size": 256, 00:18:31.409 "bdev_auto_examine": true, 00:18:31.409 "iobuf_small_cache_size": 128, 00:18:31.409 "iobuf_large_cache_size": 16 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "bdev_raid_set_options", 00:18:31.409 "params": { 00:18:31.409 "process_window_size_kb": 1024 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "bdev_iscsi_set_options", 00:18:31.409 "params": { 00:18:31.409 "timeout_sec": 30 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "bdev_nvme_set_options", 00:18:31.409 "params": { 00:18:31.409 "action_on_timeout": "none", 00:18:31.409 "timeout_us": 0, 00:18:31.409 "timeout_admin_us": 0, 00:18:31.409 "keep_alive_timeout_ms": 10000, 00:18:31.409 "arbitration_burst": 0, 00:18:31.409 "low_priority_weight": 0, 00:18:31.409 "medium_priority_weight": 0, 00:18:31.409 "high_priority_weight": 0, 00:18:31.409 "nvme_adminq_poll_period_us": 10000, 00:18:31.409 "nvme_ioq_poll_period_us": 0, 00:18:31.409 "io_queue_requests": 0, 00:18:31.409 "delay_cmd_submit": true, 00:18:31.409 "transport_retry_count": 4, 00:18:31.409 "bdev_retry_count": 3, 00:18:31.409 
"transport_ack_timeout": 0, 00:18:31.409 "ctrlr_loss_timeout_sec": 0, 00:18:31.409 "reconnect_delay_sec": 0, 00:18:31.409 "fast_io_fail_timeout_sec": 0, 00:18:31.409 "disable_auto_failback": false, 00:18:31.409 "generate_uuids": false, 00:18:31.409 "transport_tos": 0, 00:18:31.409 "nvme_error_stat": false, 00:18:31.409 "rdma_srq_size": 0, 00:18:31.409 "io_path_stat": false, 00:18:31.409 "allow_accel_sequence": false, 00:18:31.409 "rdma_max_cq_size": 0, 00:18:31.409 "rdma_cm_event_timeout_ms": 0, 00:18:31.409 "dhchap_digests": [ 00:18:31.409 "sha256", 00:18:31.409 "sha384", 00:18:31.409 "sha512" 00:18:31.409 ], 00:18:31.409 "dhchap_dhgroups": [ 00:18:31.409 "null", 00:18:31.409 "ffdhe2048", 00:18:31.409 "ffdhe3072", 00:18:31.409 "ffdhe4096", 00:18:31.409 "ffdhe6144", 00:18:31.409 "ffdhe8192" 00:18:31.409 ] 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "bdev_nvme_set_hotplug", 00:18:31.409 "params": { 00:18:31.409 "period_us": 100000, 00:18:31.409 "enable": false 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "bdev_malloc_create", 00:18:31.409 "params": { 00:18:31.409 "name": "malloc0", 00:18:31.409 "num_blocks": 8192, 00:18:31.409 "block_size": 4096, 00:18:31.409 "physical_block_size": 4096, 00:18:31.409 "uuid": "d011f598-74a8-41f0-8a43-534ac2399bf4", 00:18:31.409 "optimal_io_boundary": 0 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "bdev_wait_for_examine" 00:18:31.409 } 00:18:31.409 ] 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "subsystem": "nbd", 00:18:31.409 "config": [] 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "subsystem": "scheduler", 00:18:31.409 "config": [ 00:18:31.409 { 00:18:31.409 "method": "framework_set_scheduler", 00:18:31.409 "params": { 00:18:31.409 "name": "static" 00:18:31.409 } 00:18:31.409 } 00:18:31.409 ] 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "subsystem": "nvmf", 00:18:31.409 "config": [ 00:18:31.409 { 00:18:31.409 "method": "nvmf_set_config", 00:18:31.409 "params": { 00:18:31.409 "discovery_filter": "match_any", 00:18:31.409 "admin_cmd_passthru": { 00:18:31.409 "identify_ctrlr": false 00:18:31.409 } 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "nvmf_set_max_subsystems", 00:18:31.409 "params": { 00:18:31.409 "max_subsystems": 1024 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "nvmf_set_crdt", 00:18:31.409 "params": { 00:18:31.409 "crdt1": 0, 00:18:31.409 "crdt2": 0, 00:18:31.409 "crdt3": 0 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "nvmf_create_transport", 00:18:31.409 "params": { 00:18:31.409 "trtype": "TCP", 00:18:31.409 "max_queue_depth": 128, 00:18:31.409 "max_io_qpairs_per_ctrlr": 127, 00:18:31.409 "in_capsule_data_size": 4096, 00:18:31.409 "max_io_size": 131072, 00:18:31.409 "io_unit_size": 131072, 00:18:31.409 "max_aq_depth": 128, 00:18:31.409 "num_shared_buffers": 511, 00:18:31.409 "buf_cache_size": 4294967295, 00:18:31.409 "dif_insert_or_strip": false, 00:18:31.409 "zcopy": false, 00:18:31.409 "c2h_success": false, 00:18:31.409 "sock_priority": 0, 00:18:31.409 "abort_timeout_sec": 1, 00:18:31.409 "ack_timeout": 0 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "nvmf_create_subsystem", 00:18:31.409 "params": { 00:18:31.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.409 "allow_any_host": false, 00:18:31.409 "serial_number": "00000000000000000000", 00:18:31.409 "model_number": "SPDK bdev Controller", 00:18:31.409 "max_namespaces": 32, 00:18:31.409 "min_cntlid": 1, 00:18:31.409 "max_cntlid": 
65519, 00:18:31.409 "ana_reporting": false 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "nvmf_subsystem_add_host", 00:18:31.409 "params": { 00:18:31.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.409 "host": "nqn.2016-06.io.spdk:host1", 00:18:31.409 "psk": "key0" 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "nvmf_subsystem_add_ns", 00:18:31.409 "params": { 00:18:31.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.409 "namespace": { 00:18:31.409 "nsid": 1, 00:18:31.409 "bdev_name": "malloc0", 00:18:31.409 "nguid": "D011F59874A841F08A43534AC2399BF4", 00:18:31.409 "uuid": "d011f598-74a8-41f0-8a43-534ac2399bf4", 00:18:31.409 "no_auto_visible": false 00:18:31.409 } 00:18:31.409 } 00:18:31.409 }, 00:18:31.409 { 00:18:31.409 "method": "nvmf_subsystem_add_listener", 00:18:31.409 "params": { 00:18:31.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.410 "listen_address": { 00:18:31.410 "trtype": "TCP", 00:18:31.410 "adrfam": "IPv4", 00:18:31.410 "traddr": "10.0.0.2", 00:18:31.410 "trsvcid": "4420" 00:18:31.410 }, 00:18:31.410 "secure_channel": true 00:18:31.410 } 00:18:31.410 } 00:18:31.410 ] 00:18:31.410 } 00:18:31.410 ] 00:18:31.410 }' 00:18:31.410 14:37:39 -- nvmf/common.sh@470 -- # nvmfpid=70568 00:18:31.410 14:37:39 -- nvmf/common.sh@471 -- # waitforlisten 70568 00:18:31.410 14:37:39 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:31.410 14:37:39 -- common/autotest_common.sh@817 -- # '[' -z 70568 ']' 00:18:31.410 14:37:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.410 14:37:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:31.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.410 14:37:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.410 14:37:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:31.410 14:37:39 -- common/autotest_common.sh@10 -- # set +x 00:18:31.410 [2024-04-17 14:37:39.952707] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:31.410 [2024-04-17 14:37:39.952831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.668 [2024-04-17 14:37:40.089563] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.668 [2024-04-17 14:37:40.151203] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.668 [2024-04-17 14:37:40.151254] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.668 [2024-04-17 14:37:40.151267] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.668 [2024-04-17 14:37:40.151275] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.668 [2024-04-17 14:37:40.151282] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
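The JSON blob echoed above is the full target-side configuration that nvmf_tgt consumes through -c /dev/fd/62. Below is a minimal sketch of just the TLS-relevant subset, reusing the key path, NQNs and listen address from this run; the file name tls_tgt.json is illustrative, and the trimmed config omits other methods from the dump (e.g. the malloc bdev and namespace) that would still be needed for I/O to succeed.

cat > tls_tgt.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.XJGziDUVsy" } }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": true } }
      ]
    }
  ]
}
EOF
# Same invocation as traced above, but reading the trimmed file instead of /dev/fd/62
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tls_tgt.json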
00:18:31.668 [2024-04-17 14:37:40.151367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.926 [2024-04-17 14:37:40.343906] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.926 [2024-04-17 14:37:40.375852] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.926 [2024-04-17 14:37:40.376099] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.493 14:37:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:32.493 14:37:40 -- common/autotest_common.sh@850 -- # return 0 00:18:32.493 14:37:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:32.493 14:37:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:32.493 14:37:40 -- common/autotest_common.sh@10 -- # set +x 00:18:32.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.493 14:37:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.493 14:37:41 -- target/tls.sh@272 -- # bdevperf_pid=70601 00:18:32.493 14:37:41 -- target/tls.sh@273 -- # waitforlisten 70601 /var/tmp/bdevperf.sock 00:18:32.493 14:37:41 -- common/autotest_common.sh@817 -- # '[' -z 70601 ']' 00:18:32.493 14:37:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.493 14:37:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:32.493 14:37:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.494 14:37:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:32.494 14:37:41 -- common/autotest_common.sh@10 -- # set +x 00:18:32.494 14:37:41 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:32.494 14:37:41 -- target/tls.sh@270 -- # echo '{ 00:18:32.494 "subsystems": [ 00:18:32.494 { 00:18:32.494 "subsystem": "keyring", 00:18:32.494 "config": [ 00:18:32.494 { 00:18:32.494 "method": "keyring_file_add_key", 00:18:32.494 "params": { 00:18:32.494 "name": "key0", 00:18:32.494 "path": "/tmp/tmp.XJGziDUVsy" 00:18:32.494 } 00:18:32.494 } 00:18:32.494 ] 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "subsystem": "iobuf", 00:18:32.494 "config": [ 00:18:32.494 { 00:18:32.494 "method": "iobuf_set_options", 00:18:32.494 "params": { 00:18:32.494 "small_pool_count": 8192, 00:18:32.494 "large_pool_count": 1024, 00:18:32.494 "small_bufsize": 8192, 00:18:32.494 "large_bufsize": 135168 00:18:32.494 } 00:18:32.494 } 00:18:32.494 ] 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "subsystem": "sock", 00:18:32.494 "config": [ 00:18:32.494 { 00:18:32.494 "method": "sock_impl_set_options", 00:18:32.494 "params": { 00:18:32.494 "impl_name": "uring", 00:18:32.494 "recv_buf_size": 2097152, 00:18:32.494 "send_buf_size": 2097152, 00:18:32.494 "enable_recv_pipe": true, 00:18:32.494 "enable_quickack": false, 00:18:32.494 "enable_placement_id": 0, 00:18:32.494 "enable_zerocopy_send_server": false, 00:18:32.494 "enable_zerocopy_send_client": false, 00:18:32.494 "zerocopy_threshold": 0, 00:18:32.494 "tls_version": 0, 00:18:32.494 "enable_ktls": false 00:18:32.494 } 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "method": "sock_impl_set_options", 00:18:32.494 "params": { 00:18:32.494 "impl_name": "posix", 00:18:32.494 "recv_buf_size": 2097152, 00:18:32.494 "send_buf_size": 2097152, 00:18:32.494 
"enable_recv_pipe": true, 00:18:32.494 "enable_quickack": false, 00:18:32.494 "enable_placement_id": 0, 00:18:32.494 "enable_zerocopy_send_server": true, 00:18:32.494 "enable_zerocopy_send_client": false, 00:18:32.494 "zerocopy_threshold": 0, 00:18:32.494 "tls_version": 0, 00:18:32.494 "enable_ktls": false 00:18:32.494 } 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "method": "sock_impl_set_options", 00:18:32.494 "params": { 00:18:32.494 "impl_name": "ssl", 00:18:32.494 "recv_buf_size": 4096, 00:18:32.494 "send_buf_size": 4096, 00:18:32.494 "enable_recv_pipe": true, 00:18:32.494 "enable_quickack": false, 00:18:32.494 "enable_placement_id": 0, 00:18:32.494 "enable_zerocopy_send_server": true, 00:18:32.494 "enable_zerocopy_send_client": false, 00:18:32.494 "zerocopy_threshold": 0, 00:18:32.494 "tls_version": 0, 00:18:32.494 "enable_ktls": false 00:18:32.494 } 00:18:32.494 } 00:18:32.494 ] 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "subsystem": "vmd", 00:18:32.494 "config": [] 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "subsystem": "accel", 00:18:32.494 "config": [ 00:18:32.494 { 00:18:32.494 "method": "accel_set_options", 00:18:32.494 "params": { 00:18:32.494 "small_cache_size": 128, 00:18:32.494 "large_cache_size": 16, 00:18:32.494 "task_count": 2048, 00:18:32.494 "sequence_count": 2048, 00:18:32.494 "buf_count": 2048 00:18:32.494 } 00:18:32.494 } 00:18:32.494 ] 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "subsystem": "bdev", 00:18:32.494 "config": [ 00:18:32.494 { 00:18:32.494 "method": "bdev_set_options", 00:18:32.494 "params": { 00:18:32.494 "bdev_io_pool_size": 65535, 00:18:32.494 "bdev_io_cache_size": 256, 00:18:32.494 "bdev_auto_examine": true, 00:18:32.494 "iobuf_small_cache_size": 128, 00:18:32.494 "iobuf_large_cache_size": 16 00:18:32.494 } 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "method": "bdev_raid_set_options", 00:18:32.494 "params": { 00:18:32.494 "process_window_size_kb": 1024 00:18:32.494 } 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "method": "bdev_iscsi_set_options", 00:18:32.494 "params": { 00:18:32.494 "timeout_sec": 30 00:18:32.494 } 00:18:32.494 }, 00:18:32.494 { 00:18:32.494 "method": "bdev_nvme_set_options", 00:18:32.494 "params": { 00:18:32.494 "action_on_timeout": "none", 00:18:32.494 "timeout_us": 0, 00:18:32.494 "timeout_admin_us": 0, 00:18:32.494 "keep_alive_timeout_ms": 10000, 00:18:32.494 "arbitration_burst": 0, 00:18:32.494 "low_priority_weight": 0, 00:18:32.494 "medium_priority_weight": 0, 00:18:32.494 "high_priority_weight": 0, 00:18:32.494 "nvme_adminq_poll_period_us": 10000, 00:18:32.494 "nvme_ioq_poll_period_us": 0, 00:18:32.494 "io_queue_requests": 512, 00:18:32.494 "delay_cmd_submit": true, 00:18:32.494 "transport_retry_count": 4, 00:18:32.494 "bdev_retry_count": 3, 00:18:32.494 "transport_ack_timeout": 0, 00:18:32.495 "ctrlr_loss_timeout_sec": 0, 00:18:32.495 "reconnect_delay_sec": 0, 00:18:32.495 "fast_io_fail_timeout_sec": 0, 00:18:32.495 "disable_auto_failback": false, 00:18:32.495 "generate_uuids": false, 00:18:32.495 "transport_tos": 0, 00:18:32.495 "nvme_error_stat": false, 00:18:32.495 "rdma_srq_size": 0, 00:18:32.495 "io_path_stat": false, 00:18:32.495 "allow_accel_sequence": false, 00:18:32.495 "rdma_max_cq_size": 0, 00:18:32.495 "rdma_cm_event_timeout_ms": 0, 00:18:32.495 "dhchap_digests": [ 00:18:32.495 "sha256", 00:18:32.495 "sha384", 00:18:32.495 "sha512" 00:18:32.495 ], 00:18:32.495 "dhchap_dhgroups": [ 00:18:32.495 "null", 00:18:32.495 "ffdhe2048", 00:18:32.495 "ffdhe3072", 00:18:32.495 "ffdhe4096", 00:18:32.495 "ffdhe6144", 
00:18:32.495 "ffdhe8192" 00:18:32.495 ] 00:18:32.495 } 00:18:32.495 }, 00:18:32.495 { 00:18:32.495 "method": "bdev_nvme_attach_controller", 00:18:32.495 "params": { 00:18:32.495 "name": "nvme0", 00:18:32.495 "trtype": "TCP", 00:18:32.495 "adrfam": "IPv4", 00:18:32.495 "traddr": "10.0.0.2", 00:18:32.495 "trsvcid": "4420", 00:18:32.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.495 "prchk_reftag": false, 00:18:32.495 "prchk_guard": false, 00:18:32.495 "ctrlr_loss_timeout_sec": 0, 00:18:32.495 "reconnect_delay_sec": 0, 00:18:32.495 "fast_io_fail_timeout_sec": 0, 00:18:32.495 "psk": "key0", 00:18:32.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.495 "hdgst": false, 00:18:32.495 "ddgst": false 00:18:32.495 } 00:18:32.495 }, 00:18:32.495 { 00:18:32.495 "method": "bdev_nvme_set_hotplug", 00:18:32.495 "params": { 00:18:32.495 "period_us": 100000, 00:18:32.495 "enable": false 00:18:32.495 } 00:18:32.495 }, 00:18:32.495 { 00:18:32.495 "method": "bdev_enable_histogram", 00:18:32.495 "params": { 00:18:32.495 "name": "nvme0n1", 00:18:32.495 "enable": true 00:18:32.495 } 00:18:32.495 }, 00:18:32.495 { 00:18:32.495 "method": "bdev_wait_for_examine" 00:18:32.495 } 00:18:32.495 ] 00:18:32.495 }, 00:18:32.495 { 00:18:32.495 "subsystem": "nbd", 00:18:32.495 "config": [] 00:18:32.495 } 00:18:32.495 ] 00:18:32.495 }' 00:18:32.495 [2024-04-17 14:37:41.072613] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:32.495 [2024-04-17 14:37:41.072703] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70601 ] 00:18:32.754 [2024-04-17 14:37:41.208318] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.754 [2024-04-17 14:37:41.266773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.012 [2024-04-17 14:37:41.401964] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.579 14:37:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:33.579 14:37:41 -- common/autotest_common.sh@850 -- # return 0 00:18:33.579 14:37:41 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:33.579 14:37:41 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:33.838 14:37:42 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.838 14:37:42 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:33.838 Running I/O for 1 seconds... 
00:18:35.239 00:18:35.239 Latency(us) 00:18:35.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.239 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:35.239 Verification LBA range: start 0x0 length 0x2000 00:18:35.239 nvme0n1 : 1.02 3652.80 14.27 0.00 0.00 34662.93 7596.22 29789.09 00:18:35.239 =================================================================================================================== 00:18:35.239 Total : 3652.80 14.27 0.00 0.00 34662.93 7596.22 29789.09 00:18:35.239 0 00:18:35.239 14:37:43 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:35.239 14:37:43 -- target/tls.sh@279 -- # cleanup 00:18:35.239 14:37:43 -- target/tls.sh@15 -- # process_shm --id 0 00:18:35.239 14:37:43 -- common/autotest_common.sh@794 -- # type=--id 00:18:35.239 14:37:43 -- common/autotest_common.sh@795 -- # id=0 00:18:35.239 14:37:43 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:35.239 14:37:43 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:35.239 14:37:43 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:35.239 14:37:43 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:35.239 14:37:43 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:35.239 14:37:43 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:35.239 nvmf_trace.0 00:18:35.239 14:37:43 -- common/autotest_common.sh@809 -- # return 0 00:18:35.239 14:37:43 -- target/tls.sh@16 -- # killprocess 70601 00:18:35.239 14:37:43 -- common/autotest_common.sh@936 -- # '[' -z 70601 ']' 00:18:35.239 14:37:43 -- common/autotest_common.sh@940 -- # kill -0 70601 00:18:35.239 14:37:43 -- common/autotest_common.sh@941 -- # uname 00:18:35.239 14:37:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.239 14:37:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70601 00:18:35.239 killing process with pid 70601 00:18:35.239 Received shutdown signal, test time was about 1.000000 seconds 00:18:35.239 00:18:35.239 Latency(us) 00:18:35.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.239 =================================================================================================================== 00:18:35.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.239 14:37:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:35.239 14:37:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:35.239 14:37:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70601' 00:18:35.239 14:37:43 -- common/autotest_common.sh@955 -- # kill 70601 00:18:35.239 14:37:43 -- common/autotest_common.sh@960 -- # wait 70601 00:18:35.239 14:37:43 -- target/tls.sh@17 -- # nvmftestfini 00:18:35.239 14:37:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:35.239 14:37:43 -- nvmf/common.sh@117 -- # sync 00:18:35.239 14:37:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.239 14:37:43 -- nvmf/common.sh@120 -- # set +e 00:18:35.239 14:37:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.239 14:37:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.239 rmmod nvme_tcp 00:18:35.239 rmmod nvme_fabrics 00:18:35.239 rmmod nvme_keyring 00:18:35.239 14:37:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.239 14:37:43 -- nvmf/common.sh@124 -- # set -e 00:18:35.239 14:37:43 -- 
nvmf/common.sh@125 -- # return 0 00:18:35.239 14:37:43 -- nvmf/common.sh@478 -- # '[' -n 70568 ']' 00:18:35.239 14:37:43 -- nvmf/common.sh@479 -- # killprocess 70568 00:18:35.239 14:37:43 -- common/autotest_common.sh@936 -- # '[' -z 70568 ']' 00:18:35.239 14:37:43 -- common/autotest_common.sh@940 -- # kill -0 70568 00:18:35.239 14:37:43 -- common/autotest_common.sh@941 -- # uname 00:18:35.239 14:37:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.239 14:37:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70568 00:18:35.497 killing process with pid 70568 00:18:35.497 14:37:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:35.497 14:37:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:35.497 14:37:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70568' 00:18:35.497 14:37:43 -- common/autotest_common.sh@955 -- # kill 70568 00:18:35.497 14:37:43 -- common/autotest_common.sh@960 -- # wait 70568 00:18:35.497 14:37:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:35.497 14:37:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:35.497 14:37:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:35.497 14:37:44 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.497 14:37:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.497 14:37:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.497 14:37:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.497 14:37:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.497 14:37:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:35.497 14:37:44 -- target/tls.sh@18 -- # rm -f /tmp/tmp.O5T78NB3Uf /tmp/tmp.tnwllK4shI /tmp/tmp.XJGziDUVsy 00:18:35.497 ************************************ 00:18:35.497 END TEST nvmf_tls 00:18:35.497 ************************************ 00:18:35.497 00:18:35.497 real 1m24.342s 00:18:35.497 user 2m16.173s 00:18:35.497 sys 0m25.841s 00:18:35.497 14:37:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:35.497 14:37:44 -- common/autotest_common.sh@10 -- # set +x 00:18:35.755 14:37:44 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:35.755 14:37:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:35.755 14:37:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:35.755 14:37:44 -- common/autotest_common.sh@10 -- # set +x 00:18:35.755 ************************************ 00:18:35.755 START TEST nvmf_fips 00:18:35.755 ************************************ 00:18:35.755 14:37:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:35.755 * Looking for test storage... 
00:18:35.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:35.755 14:37:44 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:35.755 14:37:44 -- nvmf/common.sh@7 -- # uname -s 00:18:35.755 14:37:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.755 14:37:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.755 14:37:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.755 14:37:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.755 14:37:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.755 14:37:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.755 14:37:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.755 14:37:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.755 14:37:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.755 14:37:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.755 14:37:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:18:35.755 14:37:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:18:35.755 14:37:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.755 14:37:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.755 14:37:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.755 14:37:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.755 14:37:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.755 14:37:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.755 14:37:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.755 14:37:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.755 14:37:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.755 14:37:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.755 14:37:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.755 14:37:44 -- paths/export.sh@5 -- # export PATH 00:18:35.755 14:37:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.755 14:37:44 -- nvmf/common.sh@47 -- # : 0 00:18:35.755 14:37:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.755 14:37:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.755 14:37:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.755 14:37:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.755 14:37:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.755 14:37:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.755 14:37:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.755 14:37:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.755 14:37:44 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.755 14:37:44 -- fips/fips.sh@89 -- # check_openssl_version 00:18:35.755 14:37:44 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:35.755 14:37:44 -- fips/fips.sh@85 -- # openssl version 00:18:35.755 14:37:44 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:35.755 14:37:44 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:35.755 14:37:44 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:35.755 14:37:44 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:35.755 14:37:44 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:35.755 14:37:44 -- scripts/common.sh@333 -- # IFS=.-: 00:18:35.755 14:37:44 -- scripts/common.sh@333 -- # read -ra ver1 00:18:35.755 14:37:44 -- scripts/common.sh@334 -- # IFS=.-: 00:18:35.755 14:37:44 -- scripts/common.sh@334 -- # read -ra ver2 00:18:35.755 14:37:44 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:35.755 14:37:44 -- scripts/common.sh@337 -- # ver1_l=3 00:18:35.755 14:37:44 -- scripts/common.sh@338 -- # ver2_l=3 00:18:35.755 14:37:44 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:35.755 14:37:44 -- scripts/common.sh@341 -- # case "$op" in 00:18:35.755 14:37:44 -- scripts/common.sh@345 -- # : 1 00:18:35.755 14:37:44 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:35.755 14:37:44 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:35.755 14:37:44 -- scripts/common.sh@362 -- # decimal 3 00:18:35.755 14:37:44 -- scripts/common.sh@350 -- # local d=3 00:18:35.755 14:37:44 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:35.755 14:37:44 -- scripts/common.sh@352 -- # echo 3 00:18:35.755 14:37:44 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:35.755 14:37:44 -- scripts/common.sh@363 -- # decimal 3 00:18:35.755 14:37:44 -- scripts/common.sh@350 -- # local d=3 00:18:35.755 14:37:44 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:35.755 14:37:44 -- scripts/common.sh@352 -- # echo 3 00:18:35.755 14:37:44 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:35.755 14:37:44 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:35.755 14:37:44 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:35.755 14:37:44 -- scripts/common.sh@361 -- # (( v++ )) 00:18:35.755 14:37:44 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.755 14:37:44 -- scripts/common.sh@362 -- # decimal 0 00:18:35.755 14:37:44 -- scripts/common.sh@350 -- # local d=0 00:18:35.755 14:37:44 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:35.755 14:37:44 -- scripts/common.sh@352 -- # echo 0 00:18:35.755 14:37:44 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:35.755 14:37:44 -- scripts/common.sh@363 -- # decimal 0 00:18:35.755 14:37:44 -- scripts/common.sh@350 -- # local d=0 00:18:35.755 14:37:44 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:35.755 14:37:44 -- scripts/common.sh@352 -- # echo 0 00:18:35.755 14:37:44 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:35.755 14:37:44 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:35.755 14:37:44 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:35.755 14:37:44 -- scripts/common.sh@361 -- # (( v++ )) 00:18:35.755 14:37:44 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.755 14:37:44 -- scripts/common.sh@362 -- # decimal 9 00:18:35.755 14:37:44 -- scripts/common.sh@350 -- # local d=9 00:18:35.755 14:37:44 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:35.755 14:37:44 -- scripts/common.sh@352 -- # echo 9 00:18:35.755 14:37:44 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:35.755 14:37:44 -- scripts/common.sh@363 -- # decimal 0 00:18:35.755 14:37:44 -- scripts/common.sh@350 -- # local d=0 00:18:35.755 14:37:44 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:35.755 14:37:44 -- scripts/common.sh@352 -- # echo 0 00:18:35.755 14:37:44 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:35.755 14:37:44 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:35.755 14:37:44 -- scripts/common.sh@364 -- # return 0 00:18:35.755 14:37:44 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:35.755 14:37:44 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:35.755 14:37:44 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:35.755 14:37:44 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:35.755 14:37:44 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:35.755 14:37:44 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:35.755 14:37:44 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:35.755 14:37:44 -- fips/fips.sh@113 -- # build_openssl_config 00:18:35.755 14:37:44 -- fips/fips.sh@37 -- # cat 00:18:35.755 14:37:44 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:35.755 14:37:44 -- fips/fips.sh@58 -- # cat - 00:18:35.755 14:37:44 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:35.755 14:37:44 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:35.755 14:37:44 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:35.755 14:37:44 -- fips/fips.sh@116 -- # openssl list -providers 00:18:35.755 14:37:44 -- fips/fips.sh@116 -- # grep name 00:18:36.013 14:37:44 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:36.013 14:37:44 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:36.013 14:37:44 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:36.013 14:37:44 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:36.013 14:37:44 -- fips/fips.sh@127 -- # : 00:18:36.013 14:37:44 -- common/autotest_common.sh@638 -- # local es=0 00:18:36.013 14:37:44 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:36.013 14:37:44 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:36.013 14:37:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:36.013 14:37:44 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:36.013 14:37:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:36.013 14:37:44 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:36.013 14:37:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:36.013 14:37:44 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:36.013 14:37:44 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:36.013 14:37:44 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:36.013 Error setting digest 00:18:36.013 00C2284E0A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:36.013 00C2284E0A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:36.013 14:37:44 -- common/autotest_common.sh@641 -- # es=1 00:18:36.013 14:37:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:36.013 14:37:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:36.013 14:37:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:36.013 14:37:44 -- fips/fips.sh@130 -- # nvmftestinit 00:18:36.013 14:37:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:36.013 14:37:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.013 14:37:44 -- nvmf/common.sh@437 -- # prepare_net_devs 
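The FIPS gate above boils down to three observable checks: a new-enough OpenSSL (the ge 3.0.9 3.0.0 comparison), a loaded fips provider next to the base provider, and MD5 being refused once the generated spdk_fips.conf is in force. A hand-run sketch of the same checks; /dev/null stands in for the script's throw-away file descriptor, and spdk_fips.conf is the config fips.sh writes into its working directory:

openssl version                                     # must be >= 3.0.0 for provider-based FIPS
openssl list -providers | grep name                 # expect both a base provider and a fips provider
OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null   # should fail with "unsupported" while FIPS is enforced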
00:18:36.013 14:37:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:36.013 14:37:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:36.013 14:37:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.013 14:37:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.013 14:37:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.013 14:37:44 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:36.013 14:37:44 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:36.013 14:37:44 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:36.013 14:37:44 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:36.013 14:37:44 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:36.013 14:37:44 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:36.013 14:37:44 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.013 14:37:44 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.013 14:37:44 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:36.013 14:37:44 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:36.013 14:37:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:36.013 14:37:44 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:36.013 14:37:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:36.013 14:37:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.013 14:37:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:36.013 14:37:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:36.013 14:37:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:36.013 14:37:44 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:36.013 14:37:44 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:36.013 14:37:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:36.013 Cannot find device "nvmf_tgt_br" 00:18:36.013 14:37:44 -- nvmf/common.sh@155 -- # true 00:18:36.013 14:37:44 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:36.013 Cannot find device "nvmf_tgt_br2" 00:18:36.013 14:37:44 -- nvmf/common.sh@156 -- # true 00:18:36.013 14:37:44 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:36.013 14:37:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:36.013 Cannot find device "nvmf_tgt_br" 00:18:36.013 14:37:44 -- nvmf/common.sh@158 -- # true 00:18:36.013 14:37:44 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:36.013 Cannot find device "nvmf_tgt_br2" 00:18:36.013 14:37:44 -- nvmf/common.sh@159 -- # true 00:18:36.013 14:37:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:36.013 14:37:44 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:36.013 14:37:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:36.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.013 14:37:44 -- nvmf/common.sh@162 -- # true 00:18:36.013 14:37:44 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:36.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.013 14:37:44 -- nvmf/common.sh@163 -- # true 00:18:36.013 14:37:44 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:36.013 14:37:44 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:36.013 14:37:44 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:36.013 14:37:44 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:36.013 14:37:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:36.013 14:37:44 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:36.013 14:37:44 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:36.013 14:37:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:36.013 14:37:44 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:36.271 14:37:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:36.271 14:37:44 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:36.271 14:37:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:36.271 14:37:44 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:36.271 14:37:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:36.271 14:37:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:36.271 14:37:44 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:36.271 14:37:44 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:36.271 14:37:44 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:36.271 14:37:44 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:36.271 14:37:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:36.271 14:37:44 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:36.271 14:37:44 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:36.271 14:37:44 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:36.271 14:37:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:36.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:36.271 00:18:36.271 --- 10.0.0.2 ping statistics --- 00:18:36.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.272 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:36.272 14:37:44 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:36.272 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:36.272 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:36.272 00:18:36.272 --- 10.0.0.3 ping statistics --- 00:18:36.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.272 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:36.272 14:37:44 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:36.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:36.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:36.272 00:18:36.272 --- 10.0.0.1 ping statistics --- 00:18:36.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.272 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:36.272 14:37:44 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.272 14:37:44 -- nvmf/common.sh@422 -- # return 0 00:18:36.272 14:37:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:36.272 14:37:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.272 14:37:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:36.272 14:37:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:36.272 14:37:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.272 14:37:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:36.272 14:37:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:36.272 14:37:44 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:36.272 14:37:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:36.272 14:37:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:36.272 14:37:44 -- common/autotest_common.sh@10 -- # set +x 00:18:36.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.272 14:37:44 -- nvmf/common.sh@470 -- # nvmfpid=70867 00:18:36.272 14:37:44 -- nvmf/common.sh@471 -- # waitforlisten 70867 00:18:36.272 14:37:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:36.272 14:37:44 -- common/autotest_common.sh@817 -- # '[' -z 70867 ']' 00:18:36.272 14:37:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.272 14:37:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:36.272 14:37:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.272 14:37:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:36.272 14:37:44 -- common/autotest_common.sh@10 -- # set +x 00:18:36.272 [2024-04-17 14:37:44.817565] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:36.272 [2024-04-17 14:37:44.817654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.529 [2024-04-17 14:37:44.958395] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.529 [2024-04-17 14:37:45.019104] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.529 [2024-04-17 14:37:45.019165] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.529 [2024-04-17 14:37:45.019184] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.529 [2024-04-17 14:37:45.019198] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.529 [2024-04-17 14:37:45.019209] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
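nvmf_veth_init builds a small bridged topology so the initiator and the target sit on opposite ends of veth pairs, with the target side isolated in the nvmf_tgt_ns_spdk namespace. Condensed from the commands traced above (the second target interface, some link-up steps and the iptables rule are elided):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up && ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2                                           # target address reachable from the root namespace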
00:18:36.529 [2024-04-17 14:37:45.019243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.462 14:37:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:37.462 14:37:45 -- common/autotest_common.sh@850 -- # return 0 00:18:37.462 14:37:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:37.462 14:37:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.462 14:37:45 -- common/autotest_common.sh@10 -- # set +x 00:18:37.462 14:37:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.462 14:37:45 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:37.462 14:37:45 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:37.462 14:37:45 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:37.462 14:37:45 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:37.462 14:37:45 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:37.462 14:37:45 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:37.462 14:37:45 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:37.462 14:37:45 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.462 [2024-04-17 14:37:46.016318] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.462 [2024-04-17 14:37:46.032265] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.462 [2024-04-17 14:37:46.032456] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.462 [2024-04-17 14:37:46.059196] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:37.721 malloc0 00:18:37.721 14:37:46 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.721 14:37:46 -- fips/fips.sh@147 -- # bdevperf_pid=70907 00:18:37.721 14:37:46 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.721 14:37:46 -- fips/fips.sh@148 -- # waitforlisten 70907 /var/tmp/bdevperf.sock 00:18:37.721 14:37:46 -- common/autotest_common.sh@817 -- # '[' -z 70907 ']' 00:18:37.721 14:37:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.721 14:37:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:37.721 14:37:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.721 14:37:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.721 14:37:46 -- common/autotest_common.sh@10 -- # set +x 00:18:37.721 [2024-04-17 14:37:46.191247] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
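On the host side the TLS handshake is driven entirely by the PSK file: the key above is in the NVMe/TCP interchange format (NVMeTLSkey-1:01:...), must be mode 0600, and is handed to bdev_nvme_attach_controller via --psk, as traced just below. A sketch collecting those steps, with the key value and paths from this run:

echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
# Attach a TLS-secured controller through the idling bdevperf instance, then run the 10 s verify workload:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests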
00:18:37.721 [2024-04-17 14:37:46.191645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70907 ] 00:18:37.980 [2024-04-17 14:37:46.332450] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.980 [2024-04-17 14:37:46.411314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.981 14:37:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:38.981 14:37:47 -- common/autotest_common.sh@850 -- # return 0 00:18:38.981 14:37:47 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:39.263 [2024-04-17 14:37:47.608192] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.263 [2024-04-17 14:37:47.608324] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:39.263 TLSTESTn1 00:18:39.263 14:37:47 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.521 Running I/O for 10 seconds... 00:18:49.538 00:18:49.538 Latency(us) 00:18:49.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.538 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.538 Verification LBA range: start 0x0 length 0x2000 00:18:49.538 TLSTESTn1 : 10.04 3150.22 12.31 0.00 0.00 40527.42 8996.31 36700.16 00:18:49.538 =================================================================================================================== 00:18:49.538 Total : 3150.22 12.31 0.00 0.00 40527.42 8996.31 36700.16 00:18:49.538 0 00:18:49.538 14:37:57 -- fips/fips.sh@1 -- # cleanup 00:18:49.538 14:37:57 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:49.538 14:37:57 -- common/autotest_common.sh@794 -- # type=--id 00:18:49.538 14:37:57 -- common/autotest_common.sh@795 -- # id=0 00:18:49.538 14:37:57 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:49.538 14:37:57 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:49.538 14:37:57 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:49.538 14:37:57 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:49.538 14:37:57 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:49.538 14:37:57 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:49.538 nvmf_trace.0 00:18:49.538 14:37:58 -- common/autotest_common.sh@809 -- # return 0 00:18:49.538 14:37:58 -- fips/fips.sh@16 -- # killprocess 70907 00:18:49.538 14:37:58 -- common/autotest_common.sh@936 -- # '[' -z 70907 ']' 00:18:49.538 14:37:58 -- common/autotest_common.sh@940 -- # kill -0 70907 00:18:49.538 14:37:58 -- common/autotest_common.sh@941 -- # uname 00:18:49.538 14:37:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:49.538 14:37:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70907 00:18:49.538 killing process with pid 70907 00:18:49.538 Received shutdown signal, test time was 
about 10.000000 seconds 00:18:49.538 00:18:49.538 Latency(us) 00:18:49.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.538 =================================================================================================================== 00:18:49.538 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.538 14:37:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:49.538 14:37:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:49.538 14:37:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70907' 00:18:49.538 14:37:58 -- common/autotest_common.sh@955 -- # kill 70907 00:18:49.538 [2024-04-17 14:37:58.030986] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:49.538 14:37:58 -- common/autotest_common.sh@960 -- # wait 70907 00:18:49.797 14:37:58 -- fips/fips.sh@17 -- # nvmftestfini 00:18:49.797 14:37:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:49.797 14:37:58 -- nvmf/common.sh@117 -- # sync 00:18:49.797 14:37:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.797 14:37:58 -- nvmf/common.sh@120 -- # set +e 00:18:49.797 14:37:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.797 14:37:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.797 rmmod nvme_tcp 00:18:49.797 rmmod nvme_fabrics 00:18:49.797 rmmod nvme_keyring 00:18:49.797 14:37:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.797 14:37:58 -- nvmf/common.sh@124 -- # set -e 00:18:49.797 14:37:58 -- nvmf/common.sh@125 -- # return 0 00:18:49.797 14:37:58 -- nvmf/common.sh@478 -- # '[' -n 70867 ']' 00:18:49.797 14:37:58 -- nvmf/common.sh@479 -- # killprocess 70867 00:18:49.797 14:37:58 -- common/autotest_common.sh@936 -- # '[' -z 70867 ']' 00:18:49.797 14:37:58 -- common/autotest_common.sh@940 -- # kill -0 70867 00:18:49.797 14:37:58 -- common/autotest_common.sh@941 -- # uname 00:18:49.797 14:37:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:49.797 14:37:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70867 00:18:49.797 killing process with pid 70867 00:18:49.797 14:37:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:49.797 14:37:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:49.797 14:37:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70867' 00:18:49.797 14:37:58 -- common/autotest_common.sh@955 -- # kill 70867 00:18:49.797 [2024-04-17 14:37:58.331673] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:49.797 14:37:58 -- common/autotest_common.sh@960 -- # wait 70867 00:18:50.057 14:37:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:50.057 14:37:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:50.057 14:37:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:50.057 14:37:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.057 14:37:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.057 14:37:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.057 14:37:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.057 14:37:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.057 14:37:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:50.057 14:37:58 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:50.057 00:18:50.057 real 0m14.444s 00:18:50.057 user 0m19.958s 00:18:50.057 sys 0m5.839s 00:18:50.057 14:37:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:50.057 14:37:58 -- common/autotest_common.sh@10 -- # set +x 00:18:50.057 ************************************ 00:18:50.057 END TEST nvmf_fips 00:18:50.057 ************************************ 00:18:50.057 14:37:58 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:18:50.057 14:37:58 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:18:50.057 14:37:58 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:18:50.057 14:37:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:50.057 14:37:58 -- common/autotest_common.sh@10 -- # set +x 00:18:50.316 14:37:58 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:18:50.316 14:37:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:50.316 14:37:58 -- common/autotest_common.sh@10 -- # set +x 00:18:50.316 14:37:58 -- nvmf/nvmf.sh@88 -- # [[ 1 -eq 0 ]] 00:18:50.316 14:37:58 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:50.316 14:37:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:50.316 14:37:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:50.316 14:37:58 -- common/autotest_common.sh@10 -- # set +x 00:18:50.316 ************************************ 00:18:50.316 START TEST nvmf_identify 00:18:50.316 ************************************ 00:18:50.316 14:37:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:50.316 * Looking for test storage... 00:18:50.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:50.316 14:37:58 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:50.316 14:37:58 -- nvmf/common.sh@7 -- # uname -s 00:18:50.316 14:37:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.316 14:37:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.316 14:37:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.316 14:37:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.316 14:37:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.316 14:37:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.316 14:37:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.316 14:37:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.316 14:37:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.316 14:37:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.316 14:37:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:18:50.316 14:37:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:18:50.316 14:37:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.317 14:37:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.317 14:37:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:50.317 14:37:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.317 14:37:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:50.317 14:37:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.317 14:37:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.317 14:37:58 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.317 14:37:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.317 14:37:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.317 14:37:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.317 14:37:58 -- paths/export.sh@5 -- # export PATH 00:18:50.317 14:37:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.317 14:37:58 -- nvmf/common.sh@47 -- # : 0 00:18:50.317 14:37:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:50.317 14:37:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:50.317 14:37:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.317 14:37:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.317 14:37:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.317 14:37:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:50.317 14:37:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:50.317 14:37:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:50.317 14:37:58 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.317 14:37:58 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.317 14:37:58 -- host/identify.sh@14 -- # nvmftestinit 00:18:50.317 14:37:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:50.317 14:37:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.317 14:37:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:50.317 14:37:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:50.317 14:37:58 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:18:50.317 14:37:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.317 14:37:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.317 14:37:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.317 14:37:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:50.317 14:37:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:50.317 14:37:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:50.317 14:37:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:50.317 14:37:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:50.317 14:37:58 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:50.317 14:37:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.317 14:37:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.317 14:37:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:50.317 14:37:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:50.317 14:37:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:50.317 14:37:58 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:50.317 14:37:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:50.317 14:37:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.317 14:37:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:50.317 14:37:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:50.317 14:37:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:50.317 14:37:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:50.317 14:37:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:50.317 14:37:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:50.317 Cannot find device "nvmf_tgt_br" 00:18:50.317 14:37:58 -- nvmf/common.sh@155 -- # true 00:18:50.317 14:37:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:50.317 Cannot find device "nvmf_tgt_br2" 00:18:50.317 14:37:58 -- nvmf/common.sh@156 -- # true 00:18:50.317 14:37:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:50.317 14:37:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:50.317 Cannot find device "nvmf_tgt_br" 00:18:50.317 14:37:58 -- nvmf/common.sh@158 -- # true 00:18:50.317 14:37:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:50.317 Cannot find device "nvmf_tgt_br2" 00:18:50.317 14:37:58 -- nvmf/common.sh@159 -- # true 00:18:50.317 14:37:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:50.576 14:37:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:50.576 14:37:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:50.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.576 14:37:58 -- nvmf/common.sh@162 -- # true 00:18:50.576 14:37:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:50.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.576 14:37:58 -- nvmf/common.sh@163 -- # true 00:18:50.576 14:37:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:50.576 14:37:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:50.576 14:37:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:50.576 14:37:58 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:50.576 14:37:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:50.576 14:37:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:50.576 14:37:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:50.576 14:37:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:50.576 14:37:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:50.576 14:37:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:50.576 14:37:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:50.576 14:37:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:50.576 14:37:59 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:50.576 14:37:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:50.576 14:37:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:50.576 14:37:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:50.576 14:37:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:50.576 14:37:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:50.576 14:37:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:50.576 14:37:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:50.576 14:37:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:50.835 14:37:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:50.835 14:37:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:50.835 14:37:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:50.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:18:50.835 00:18:50.835 --- 10.0.0.2 ping statistics --- 00:18:50.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.835 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:18:50.836 14:37:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:50.836 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:50.836 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:50.836 00:18:50.836 --- 10.0.0.3 ping statistics --- 00:18:50.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.836 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:50.836 14:37:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:50.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:50.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:50.836 00:18:50.836 --- 10.0.0.1 ping statistics --- 00:18:50.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.836 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:50.836 14:37:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.836 14:37:59 -- nvmf/common.sh@422 -- # return 0 00:18:50.836 14:37:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:50.836 14:37:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.836 14:37:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:50.836 14:37:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:50.836 14:37:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.836 14:37:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:50.836 14:37:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:50.836 14:37:59 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:50.836 14:37:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:50.836 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:50.836 14:37:59 -- host/identify.sh@19 -- # nvmfpid=71255 00:18:50.836 14:37:59 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:50.836 14:37:59 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.836 14:37:59 -- host/identify.sh@23 -- # waitforlisten 71255 00:18:50.836 14:37:59 -- common/autotest_common.sh@817 -- # '[' -z 71255 ']' 00:18:50.836 14:37:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.836 14:37:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:50.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.836 14:37:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.836 14:37:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:50.836 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:50.836 [2024-04-17 14:37:59.294214] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:50.836 [2024-04-17 14:37:59.294343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.836 [2024-04-17 14:37:59.434017] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.095 [2024-04-17 14:37:59.495295] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.095 [2024-04-17 14:37:59.495562] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.095 [2024-04-17 14:37:59.495773] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.095 [2024-04-17 14:37:59.496160] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.095 [2024-04-17 14:37:59.496314] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
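The namespace, veth, bridge and target start-up steps traced above condense to roughly the sketch below. Interface names, addresses, core mask and binary paths are the ones shown in the trace; the nvmfpid variable and the final polling loop are illustrative stand-ins for the harness's waitforlisten helper, not the helper itself.

    # Build the veth/bridge topology: 10.0.0.1 = initiator side, 10.0.0.2/10.0.0.3 = target ports in a netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3               # host reaches both target ports
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # target namespace reaches the initiator

    # Load the kernel NVMe/TCP initiator and start the SPDK target inside the namespace
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the target answers (approximation of waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done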
00:18:51.095 [2024-04-17 14:37:59.496665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.095 [2024-04-17 14:37:59.496737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.095 [2024-04-17 14:37:59.496818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.095 [2024-04-17 14:37:59.496829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.095 14:37:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.095 14:37:59 -- common/autotest_common.sh@850 -- # return 0 00:18:51.095 14:37:59 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:51.095 14:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.095 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.096 [2024-04-17 14:37:59.590779] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.096 14:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.096 14:37:59 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:51.096 14:37:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:51.096 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.096 14:37:59 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:51.096 14:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.096 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.096 Malloc0 00:18:51.096 14:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.096 14:37:59 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.096 14:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.096 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.096 14:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.096 14:37:59 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:51.096 14:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.096 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.096 14:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.096 14:37:59 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.096 14:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.096 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.096 [2024-04-17 14:37:59.694177] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.357 14:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.357 14:37:59 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:51.357 14:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.357 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.357 14:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.357 14:37:59 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:51.357 14:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.357 14:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.357 [2024-04-17 14:37:59.709803] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:18:51.357 [ 
00:18:51.357 { 00:18:51.357 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:51.357 "subtype": "Discovery", 00:18:51.357 "listen_addresses": [ 00:18:51.357 { 00:18:51.357 "transport": "TCP", 00:18:51.357 "trtype": "TCP", 00:18:51.357 "adrfam": "IPv4", 00:18:51.357 "traddr": "10.0.0.2", 00:18:51.357 "trsvcid": "4420" 00:18:51.357 } 00:18:51.357 ], 00:18:51.357 "allow_any_host": true, 00:18:51.357 "hosts": [] 00:18:51.357 }, 00:18:51.357 { 00:18:51.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.357 "subtype": "NVMe", 00:18:51.357 "listen_addresses": [ 00:18:51.357 { 00:18:51.357 "transport": "TCP", 00:18:51.357 "trtype": "TCP", 00:18:51.357 "adrfam": "IPv4", 00:18:51.357 "traddr": "10.0.0.2", 00:18:51.357 "trsvcid": "4420" 00:18:51.357 } 00:18:51.357 ], 00:18:51.357 "allow_any_host": true, 00:18:51.357 "hosts": [], 00:18:51.357 "serial_number": "SPDK00000000000001", 00:18:51.357 "model_number": "SPDK bdev Controller", 00:18:51.357 "max_namespaces": 32, 00:18:51.357 "min_cntlid": 1, 00:18:51.357 "max_cntlid": 65519, 00:18:51.357 "namespaces": [ 00:18:51.357 { 00:18:51.357 "nsid": 1, 00:18:51.357 "bdev_name": "Malloc0", 00:18:51.357 "name": "Malloc0", 00:18:51.357 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:51.357 "eui64": "ABCDEF0123456789", 00:18:51.357 "uuid": "d26c2a09-0d16-4dc6-b886-5ba841c3d095" 00:18:51.357 } 00:18:51.357 ] 00:18:51.357 } 00:18:51.357 ] 00:18:51.357 14:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.357 14:37:59 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:51.357 [2024-04-17 14:37:59.747247] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
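For readers reproducing the provisioning that rpc_cmd performed above, it maps to direct scripts/rpc.py calls like the sketch below. The flags are copied from the trace; the RPC shell variable, the jq sanity check, and the closing spdk_nvme_identify invocation against cnode1 are illustrative additions rather than part of identify.sh itself.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, harness default options
    $RPC bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Sanity check against the JSON dumped above: both NQNs should be listed
    $RPC nvmf_get_subsystems | jq -r '.[].nqn'

    # The identify run that starts here targets the discovery subsystem; the same binary can also be
    # pointed at the NVM subsystem created above:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all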
00:18:51.357 [2024-04-17 14:37:59.747548] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71282 ] 00:18:51.357 [2024-04-17 14:37:59.897004] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:18:51.357 [2024-04-17 14:37:59.897101] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:51.357 [2024-04-17 14:37:59.897109] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:51.357 [2024-04-17 14:37:59.897125] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:51.357 [2024-04-17 14:37:59.897141] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:18:51.357 [2024-04-17 14:37:59.897328] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:18:51.357 [2024-04-17 14:37:59.897402] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ecd300 0 00:18:51.357 [2024-04-17 14:37:59.903999] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:51.357 [2024-04-17 14:37:59.904037] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:51.357 [2024-04-17 14:37:59.904045] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:51.357 [2024-04-17 14:37:59.904049] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:51.357 [2024-04-17 14:37:59.904097] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.357 [2024-04-17 14:37:59.904105] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.357 [2024-04-17 14:37:59.904110] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.357 [2024-04-17 14:37:59.904127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:51.357 [2024-04-17 14:37:59.904165] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.357 [2024-04-17 14:37:59.911993] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.357 [2024-04-17 14:37:59.912033] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.357 [2024-04-17 14:37:59.912040] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.357 [2024-04-17 14:37:59.912046] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f159c0) on tqpair=0x1ecd300 00:18:51.357 [2024-04-17 14:37:59.912064] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:51.357 [2024-04-17 14:37:59.912077] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:18:51.357 [2024-04-17 14:37:59.912083] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:18:51.357 [2024-04-17 14:37:59.912109] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.357 [2024-04-17 14:37:59.912116] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.357 [2024-04-17 
14:37:59.912120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.357 [2024-04-17 14:37:59.912135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.357 [2024-04-17 14:37:59.912174] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.357 [2024-04-17 14:37:59.912354] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.357 [2024-04-17 14:37:59.912373] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.357 [2024-04-17 14:37:59.912381] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.357 [2024-04-17 14:37:59.912388] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f159c0) on tqpair=0x1ecd300 00:18:51.357 [2024-04-17 14:37:59.912405] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:18:51.357 [2024-04-17 14:37:59.912420] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:18:51.357 [2024-04-17 14:37:59.912434] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.357 [2024-04-17 14:37:59.912442] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.357 [2024-04-17 14:37:59.912449] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.357 [2024-04-17 14:37:59.912462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.357 [2024-04-17 14:37:59.912501] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.357 [2024-04-17 14:37:59.912630] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.357 [2024-04-17 14:37:59.912661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.357 [2024-04-17 14:37:59.912670] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.357 [2024-04-17 14:37:59.912677] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f159c0) on tqpair=0x1ecd300 00:18:51.357 [2024-04-17 14:37:59.912690] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:18:51.357 [2024-04-17 14:37:59.912707] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:18:51.358 [2024-04-17 14:37:59.912722] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.912730] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.912737] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.358 [2024-04-17 14:37:59.912751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.358 [2024-04-17 14:37:59.912790] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.358 [2024-04-17 14:37:59.912912] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.358 [2024-04-17 14:37:59.912972] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.358 [2024-04-17 14:37:59.912984] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.912992] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f159c0) on tqpair=0x1ecd300 00:18:51.358 [2024-04-17 14:37:59.913004] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:51.358 [2024-04-17 14:37:59.913023] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.913032] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.913040] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.358 [2024-04-17 14:37:59.913054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.358 [2024-04-17 14:37:59.913092] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.358 [2024-04-17 14:37:59.913215] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.358 [2024-04-17 14:37:59.913248] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.358 [2024-04-17 14:37:59.913256] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.913264] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f159c0) on tqpair=0x1ecd300 00:18:51.358 [2024-04-17 14:37:59.913274] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:18:51.358 [2024-04-17 14:37:59.913284] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:18:51.358 [2024-04-17 14:37:59.913299] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:51.358 [2024-04-17 14:37:59.913410] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:18:51.358 [2024-04-17 14:37:59.913428] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:51.358 [2024-04-17 14:37:59.913447] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.913455] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.913462] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.358 [2024-04-17 14:37:59.913475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.358 [2024-04-17 14:37:59.913515] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.358 [2024-04-17 14:37:59.913645] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.358 [2024-04-17 14:37:59.913672] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.358 [2024-04-17 14:37:59.913681] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
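The -L all output around here is dominated by per-PDU DEBUG lines. To follow only the controller-initialization state machine, one option is to re-run the same identify command and filter for state transitions and admin/fabric commands; the grep pattern below is illustrative and not part of the test script.

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all 2>&1 | grep -E 'setting state to|FABRIC (CONNECT|PROPERTY)|IDENTIFY \(06\)|GET LOG PAGE'
    # Order seen in this trace: connect adminq, read VS and CAP, clear CC.EN and wait for CSTS.RDY=0,
    # set CC.EN=1 and wait for CSTS.RDY=1, identify controller, configure AER, set keep-alive,
    # then GET LOG PAGE to fetch the discovery log that is printed further below.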
00:18:51.358 [2024-04-17 14:37:59.913689] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f159c0) on tqpair=0x1ecd300 00:18:51.358 [2024-04-17 14:37:59.913700] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:51.358 [2024-04-17 14:37:59.913719] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.913727] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.913734] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.358 [2024-04-17 14:37:59.913747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.358 [2024-04-17 14:37:59.913781] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.358 [2024-04-17 14:37:59.913915] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.358 [2024-04-17 14:37:59.913935] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.358 [2024-04-17 14:37:59.913943] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.913978] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f159c0) on tqpair=0x1ecd300 00:18:51.358 [2024-04-17 14:37:59.913990] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:51.358 [2024-04-17 14:37:59.913998] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:18:51.358 [2024-04-17 14:37:59.914009] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:18:51.358 [2024-04-17 14:37:59.914022] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:18:51.358 [2024-04-17 14:37:59.914038] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.914043] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.358 [2024-04-17 14:37:59.914053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.358 [2024-04-17 14:37:59.914081] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.358 [2024-04-17 14:37:59.914265] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.358 [2024-04-17 14:37:59.914277] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.358 [2024-04-17 14:37:59.914282] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.914286] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ecd300): datao=0, datal=4096, cccid=0 00:18:51.358 [2024-04-17 14:37:59.914292] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f159c0) on tqpair(0x1ecd300): expected_datao=0, payload_size=4096 00:18:51.358 [2024-04-17 14:37:59.914297] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:18:51.358 [2024-04-17 14:37:59.914308] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.914317] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.914340] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.358 [2024-04-17 14:37:59.914353] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.358 [2024-04-17 14:37:59.914359] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.914365] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f159c0) on tqpair=0x1ecd300 00:18:51.358 [2024-04-17 14:37:59.914381] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:18:51.358 [2024-04-17 14:37:59.914393] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:18:51.358 [2024-04-17 14:37:59.914407] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:18:51.358 [2024-04-17 14:37:59.914424] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:18:51.358 [2024-04-17 14:37:59.914433] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:18:51.358 [2024-04-17 14:37:59.914442] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:18:51.358 [2024-04-17 14:37:59.914458] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:18:51.358 [2024-04-17 14:37:59.914472] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.914480] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.358 [2024-04-17 14:37:59.914486] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.358 [2024-04-17 14:37:59.914499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:51.359 [2024-04-17 14:37:59.914539] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.359 [2024-04-17 14:37:59.914672] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.359 [2024-04-17 14:37:59.914704] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.359 [2024-04-17 14:37:59.914713] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914721] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f159c0) on tqpair=0x1ecd300 00:18:51.359 [2024-04-17 14:37:59.914738] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914747] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914754] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ecd300) 00:18:51.359 [2024-04-17 14:37:59.914767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.359 [2024-04-17 14:37:59.914778] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914786] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914793] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ecd300) 00:18:51.359 [2024-04-17 14:37:59.914803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.359 [2024-04-17 14:37:59.914815] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914823] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914829] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ecd300) 00:18:51.359 [2024-04-17 14:37:59.914840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.359 [2024-04-17 14:37:59.914851] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914858] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914865] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.359 [2024-04-17 14:37:59.914876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.359 [2024-04-17 14:37:59.914887] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:18:51.359 [2024-04-17 14:37:59.914917] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:51.359 [2024-04-17 14:37:59.914932] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.914940] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ecd300) 00:18:51.359 [2024-04-17 14:37:59.914973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.359 [2024-04-17 14:37:59.915020] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f159c0, cid 0, qid 0 00:18:51.359 [2024-04-17 14:37:59.915034] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15b20, cid 1, qid 0 00:18:51.359 [2024-04-17 14:37:59.915042] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15c80, cid 2, qid 0 00:18:51.359 [2024-04-17 14:37:59.915051] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.359 [2024-04-17 14:37:59.915059] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15f40, cid 4, qid 0 00:18:51.359 [2024-04-17 14:37:59.915322] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.359 [2024-04-17 14:37:59.915347] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.359 [2024-04-17 14:37:59.915356] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915364] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15f40) on tqpair=0x1ecd300 00:18:51.359 [2024-04-17 14:37:59.915376] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:18:51.359 [2024-04-17 14:37:59.915386] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:18:51.359 [2024-04-17 14:37:59.915410] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915420] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ecd300) 00:18:51.359 [2024-04-17 14:37:59.915434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.359 [2024-04-17 14:37:59.915470] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15f40, cid 4, qid 0 00:18:51.359 [2024-04-17 14:37:59.915628] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.359 [2024-04-17 14:37:59.915650] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.359 [2024-04-17 14:37:59.915659] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915665] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ecd300): datao=0, datal=4096, cccid=4 00:18:51.359 [2024-04-17 14:37:59.915674] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f15f40) on tqpair(0x1ecd300): expected_datao=0, payload_size=4096 00:18:51.359 [2024-04-17 14:37:59.915682] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915695] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915703] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915718] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.359 [2024-04-17 14:37:59.915728] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.359 [2024-04-17 14:37:59.915734] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915742] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15f40) on tqpair=0x1ecd300 00:18:51.359 [2024-04-17 14:37:59.915767] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:18:51.359 [2024-04-17 14:37:59.915802] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915812] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ecd300) 00:18:51.359 [2024-04-17 14:37:59.915825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.359 [2024-04-17 14:37:59.915838] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915845] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.915852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ecd300) 00:18:51.359 [2024-04-17 14:37:59.915863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.359 [2024-04-17 14:37:59.915909] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x1f15f40, cid 4, qid 0 00:18:51.359 [2024-04-17 14:37:59.915922] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f160a0, cid 5, qid 0 00:18:51.359 [2024-04-17 14:37:59.919993] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.359 [2024-04-17 14:37:59.920037] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.359 [2024-04-17 14:37:59.920049] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920057] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ecd300): datao=0, datal=1024, cccid=4 00:18:51.359 [2024-04-17 14:37:59.920067] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f15f40) on tqpair(0x1ecd300): expected_datao=0, payload_size=1024 00:18:51.359 [2024-04-17 14:37:59.920075] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920089] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920097] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920106] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.359 [2024-04-17 14:37:59.920116] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.359 [2024-04-17 14:37:59.920122] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920129] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f160a0) on tqpair=0x1ecd300 00:18:51.359 [2024-04-17 14:37:59.920142] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.359 [2024-04-17 14:37:59.920152] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.359 [2024-04-17 14:37:59.920158] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920165] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15f40) on tqpair=0x1ecd300 00:18:51.359 [2024-04-17 14:37:59.920204] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920213] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ecd300) 00:18:51.359 [2024-04-17 14:37:59.920224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.359 [2024-04-17 14:37:59.920267] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15f40, cid 4, qid 0 00:18:51.359 [2024-04-17 14:37:59.920456] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.359 [2024-04-17 14:37:59.920479] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.359 [2024-04-17 14:37:59.920487] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920491] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ecd300): datao=0, datal=3072, cccid=4 00:18:51.359 [2024-04-17 14:37:59.920497] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f15f40) on tqpair(0x1ecd300): expected_datao=0, payload_size=3072 00:18:51.359 [2024-04-17 14:37:59.920502] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920510] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.359 [2024-04-17 14:37:59.920514] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.360 [2024-04-17 14:37:59.920524] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.360 [2024-04-17 14:37:59.920531] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.360 [2024-04-17 14:37:59.920535] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.360 [2024-04-17 14:37:59.920539] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15f40) on tqpair=0x1ecd300 00:18:51.360 [2024-04-17 14:37:59.920553] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.360 [2024-04-17 14:37:59.920559] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ecd300) 00:18:51.360 [2024-04-17 14:37:59.920568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.360 [2024-04-17 14:37:59.920599] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15f40, cid 4, qid 0 00:18:51.360 [2024-04-17 14:37:59.920754] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.360 [2024-04-17 14:37:59.920789] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.360 [2024-04-17 14:37:59.920799] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.360 [2024-04-17 14:37:59.920805] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ecd300): datao=0, datal=8, cccid=4 00:18:51.360 [2024-04-17 14:37:59.920813] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f15f40) on tqpair(0x1ecd300): expected_datao=0, payload_size=8 00:18:51.360 [2024-04-17 14:37:59.920820] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.360 [2024-04-17 14:37:59.920831] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.360 [2024-04-17 14:37:59.920838] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.360 [2024-04-17 14:37:59.920871] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.360 [2024-04-17 14:37:59.920885] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.360 [2024-04-17 14:37:59.920891] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.360 [2024-04-17 14:37:59.920897] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15f40) on tqpair=0x1ecd300 00:18:51.360 ===================================================== 00:18:51.360 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:51.360 ===================================================== 00:18:51.360 Controller Capabilities/Features 00:18:51.360 ================================ 00:18:51.360 Vendor ID: 0000 00:18:51.360 Subsystem Vendor ID: 0000 00:18:51.360 Serial Number: .................... 00:18:51.360 Model Number: ........................................ 
00:18:51.360 Firmware Version: 24.05 00:18:51.360 Recommended Arb Burst: 0 00:18:51.360 IEEE OUI Identifier: 00 00 00 00:18:51.360 Multi-path I/O 00:18:51.360 May have multiple subsystem ports: No 00:18:51.360 May have multiple controllers: No 00:18:51.360 Associated with SR-IOV VF: No 00:18:51.360 Max Data Transfer Size: 131072 00:18:51.360 Max Number of Namespaces: 0 00:18:51.360 Max Number of I/O Queues: 1024 00:18:51.360 NVMe Specification Version (VS): 1.3 00:18:51.360 NVMe Specification Version (Identify): 1.3 00:18:51.360 Maximum Queue Entries: 128 00:18:51.360 Contiguous Queues Required: Yes 00:18:51.360 Arbitration Mechanisms Supported 00:18:51.360 Weighted Round Robin: Not Supported 00:18:51.360 Vendor Specific: Not Supported 00:18:51.360 Reset Timeout: 15000 ms 00:18:51.360 Doorbell Stride: 4 bytes 00:18:51.360 NVM Subsystem Reset: Not Supported 00:18:51.360 Command Sets Supported 00:18:51.360 NVM Command Set: Supported 00:18:51.360 Boot Partition: Not Supported 00:18:51.360 Memory Page Size Minimum: 4096 bytes 00:18:51.360 Memory Page Size Maximum: 4096 bytes 00:18:51.360 Persistent Memory Region: Not Supported 00:18:51.360 Optional Asynchronous Events Supported 00:18:51.360 Namespace Attribute Notices: Not Supported 00:18:51.360 Firmware Activation Notices: Not Supported 00:18:51.360 ANA Change Notices: Not Supported 00:18:51.360 PLE Aggregate Log Change Notices: Not Supported 00:18:51.360 LBA Status Info Alert Notices: Not Supported 00:18:51.360 EGE Aggregate Log Change Notices: Not Supported 00:18:51.360 Normal NVM Subsystem Shutdown event: Not Supported 00:18:51.360 Zone Descriptor Change Notices: Not Supported 00:18:51.360 Discovery Log Change Notices: Supported 00:18:51.360 Controller Attributes 00:18:51.360 128-bit Host Identifier: Not Supported 00:18:51.360 Non-Operational Permissive Mode: Not Supported 00:18:51.360 NVM Sets: Not Supported 00:18:51.360 Read Recovery Levels: Not Supported 00:18:51.360 Endurance Groups: Not Supported 00:18:51.360 Predictable Latency Mode: Not Supported 00:18:51.360 Traffic Based Keep ALive: Not Supported 00:18:51.360 Namespace Granularity: Not Supported 00:18:51.360 SQ Associations: Not Supported 00:18:51.360 UUID List: Not Supported 00:18:51.360 Multi-Domain Subsystem: Not Supported 00:18:51.360 Fixed Capacity Management: Not Supported 00:18:51.360 Variable Capacity Management: Not Supported 00:18:51.360 Delete Endurance Group: Not Supported 00:18:51.360 Delete NVM Set: Not Supported 00:18:51.360 Extended LBA Formats Supported: Not Supported 00:18:51.360 Flexible Data Placement Supported: Not Supported 00:18:51.360 00:18:51.360 Controller Memory Buffer Support 00:18:51.360 ================================ 00:18:51.360 Supported: No 00:18:51.360 00:18:51.360 Persistent Memory Region Support 00:18:51.360 ================================ 00:18:51.360 Supported: No 00:18:51.360 00:18:51.360 Admin Command Set Attributes 00:18:51.360 ============================ 00:18:51.360 Security Send/Receive: Not Supported 00:18:51.360 Format NVM: Not Supported 00:18:51.360 Firmware Activate/Download: Not Supported 00:18:51.360 Namespace Management: Not Supported 00:18:51.360 Device Self-Test: Not Supported 00:18:51.360 Directives: Not Supported 00:18:51.360 NVMe-MI: Not Supported 00:18:51.360 Virtualization Management: Not Supported 00:18:51.360 Doorbell Buffer Config: Not Supported 00:18:51.360 Get LBA Status Capability: Not Supported 00:18:51.360 Command & Feature Lockdown Capability: Not Supported 00:18:51.360 Abort Command Limit: 1 00:18:51.360 Async 
Event Request Limit: 4 00:18:51.360 Number of Firmware Slots: N/A 00:18:51.360 Firmware Slot 1 Read-Only: N/A 00:18:51.360 Firmware Activation Without Reset: N/A 00:18:51.360 Multiple Update Detection Support: N/A 00:18:51.360 Firmware Update Granularity: No Information Provided 00:18:51.360 Per-Namespace SMART Log: No 00:18:51.360 Asymmetric Namespace Access Log Page: Not Supported 00:18:51.360 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:51.360 Command Effects Log Page: Not Supported 00:18:51.360 Get Log Page Extended Data: Supported 00:18:51.360 Telemetry Log Pages: Not Supported 00:18:51.360 Persistent Event Log Pages: Not Supported 00:18:51.360 Supported Log Pages Log Page: May Support 00:18:51.360 Commands Supported & Effects Log Page: Not Supported 00:18:51.360 Feature Identifiers & Effects Log Page:May Support 00:18:51.360 NVMe-MI Commands & Effects Log Page: May Support 00:18:51.360 Data Area 4 for Telemetry Log: Not Supported 00:18:51.360 Error Log Page Entries Supported: 128 00:18:51.360 Keep Alive: Not Supported 00:18:51.360 00:18:51.360 NVM Command Set Attributes 00:18:51.360 ========================== 00:18:51.360 Submission Queue Entry Size 00:18:51.360 Max: 1 00:18:51.360 Min: 1 00:18:51.360 Completion Queue Entry Size 00:18:51.360 Max: 1 00:18:51.360 Min: 1 00:18:51.360 Number of Namespaces: 0 00:18:51.360 Compare Command: Not Supported 00:18:51.360 Write Uncorrectable Command: Not Supported 00:18:51.360 Dataset Management Command: Not Supported 00:18:51.360 Write Zeroes Command: Not Supported 00:18:51.360 Set Features Save Field: Not Supported 00:18:51.360 Reservations: Not Supported 00:18:51.360 Timestamp: Not Supported 00:18:51.360 Copy: Not Supported 00:18:51.361 Volatile Write Cache: Not Present 00:18:51.361 Atomic Write Unit (Normal): 1 00:18:51.361 Atomic Write Unit (PFail): 1 00:18:51.361 Atomic Compare & Write Unit: 1 00:18:51.361 Fused Compare & Write: Supported 00:18:51.361 Scatter-Gather List 00:18:51.361 SGL Command Set: Supported 00:18:51.361 SGL Keyed: Supported 00:18:51.361 SGL Bit Bucket Descriptor: Not Supported 00:18:51.361 SGL Metadata Pointer: Not Supported 00:18:51.361 Oversized SGL: Not Supported 00:18:51.361 SGL Metadata Address: Not Supported 00:18:51.361 SGL Offset: Supported 00:18:51.361 Transport SGL Data Block: Not Supported 00:18:51.361 Replay Protected Memory Block: Not Supported 00:18:51.361 00:18:51.361 Firmware Slot Information 00:18:51.361 ========================= 00:18:51.361 Active slot: 0 00:18:51.361 00:18:51.361 00:18:51.361 Error Log 00:18:51.361 ========= 00:18:51.361 00:18:51.361 Active Namespaces 00:18:51.361 ================= 00:18:51.361 Discovery Log Page 00:18:51.361 ================== 00:18:51.361 Generation Counter: 2 00:18:51.361 Number of Records: 2 00:18:51.361 Record Format: 0 00:18:51.361 00:18:51.361 Discovery Log Entry 0 00:18:51.361 ---------------------- 00:18:51.361 Transport Type: 3 (TCP) 00:18:51.361 Address Family: 1 (IPv4) 00:18:51.361 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:51.361 Entry Flags: 00:18:51.361 Duplicate Returned Information: 1 00:18:51.361 Explicit Persistent Connection Support for Discovery: 1 00:18:51.361 Transport Requirements: 00:18:51.361 Secure Channel: Not Required 00:18:51.361 Port ID: 0 (0x0000) 00:18:51.361 Controller ID: 65535 (0xffff) 00:18:51.361 Admin Max SQ Size: 128 00:18:51.361 Transport Service Identifier: 4420 00:18:51.361 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:51.361 Transport Address: 10.0.0.2 00:18:51.361 
Discovery Log Entry 1 00:18:51.361 ---------------------- 00:18:51.361 Transport Type: 3 (TCP) 00:18:51.361 Address Family: 1 (IPv4) 00:18:51.361 Subsystem Type: 2 (NVM Subsystem) 00:18:51.361 Entry Flags: 00:18:51.361 Duplicate Returned Information: 0 00:18:51.361 Explicit Persistent Connection Support for Discovery: 0 00:18:51.361 Transport Requirements: 00:18:51.361 Secure Channel: Not Required 00:18:51.361 Port ID: 0 (0x0000) 00:18:51.361 Controller ID: 65535 (0xffff) 00:18:51.361 Admin Max SQ Size: 128 00:18:51.361 Transport Service Identifier: 4420 00:18:51.361 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:51.361 Transport Address: 10.0.0.2 [2024-04-17 14:37:59.921098] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:18:51.361 [2024-04-17 14:37:59.921130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.361 [2024-04-17 14:37:59.921145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.361 [2024-04-17 14:37:59.921157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.361 [2024-04-17 14:37:59.921170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.361 [2024-04-17 14:37:59.921188] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.921198] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.921206] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.361 [2024-04-17 14:37:59.921219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.361 [2024-04-17 14:37:59.921264] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.361 [2024-04-17 14:37:59.921386] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.361 [2024-04-17 14:37:59.921402] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.361 [2024-04-17 14:37:59.921409] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.921416] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.361 [2024-04-17 14:37:59.921441] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.921451] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.921457] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.361 [2024-04-17 14:37:59.921470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.361 [2024-04-17 14:37:59.921510] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.361 [2024-04-17 14:37:59.921679] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.361 [2024-04-17 14:37:59.921711] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.361 [2024-04-17 14:37:59.921720] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.921727] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.361 [2024-04-17 14:37:59.921738] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:18:51.361 [2024-04-17 14:37:59.921746] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:18:51.361 [2024-04-17 14:37:59.921764] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.921774] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.921781] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.361 [2024-04-17 14:37:59.921794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.361 [2024-04-17 14:37:59.921833] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.361 [2024-04-17 14:37:59.921967] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.361 [2024-04-17 14:37:59.921989] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.361 [2024-04-17 14:37:59.921998] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.922006] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.361 [2024-04-17 14:37:59.922028] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.922038] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.922046] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.361 [2024-04-17 14:37:59.922059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.361 [2024-04-17 14:37:59.922095] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.361 [2024-04-17 14:37:59.922233] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.361 [2024-04-17 14:37:59.922254] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.361 [2024-04-17 14:37:59.922261] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.922268] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.361 [2024-04-17 14:37:59.922288] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.922297] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.922304] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.361 [2024-04-17 14:37:59.922317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.361 [2024-04-17 14:37:59.922348] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.361 [2024-04-17 14:37:59.922493] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.361 [2024-04-17 
14:37:59.922509] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.361 [2024-04-17 14:37:59.922515] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.361 [2024-04-17 14:37:59.922522] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.361 [2024-04-17 14:37:59.922543] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.922559] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.922567] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.362 [2024-04-17 14:37:59.922580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.362 [2024-04-17 14:37:59.922613] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.362 [2024-04-17 14:37:59.922749] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.362 [2024-04-17 14:37:59.922771] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.362 [2024-04-17 14:37:59.922779] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.922786] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.362 [2024-04-17 14:37:59.922804] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.922810] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.922814] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.362 [2024-04-17 14:37:59.922823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.362 [2024-04-17 14:37:59.922849] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.362 [2024-04-17 14:37:59.922977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.362 [2024-04-17 14:37:59.922986] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.362 [2024-04-17 14:37:59.922990] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.922995] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.362 [2024-04-17 14:37:59.923008] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923013] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923017] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.362 [2024-04-17 14:37:59.923025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.362 [2024-04-17 14:37:59.923047] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.362 [2024-04-17 14:37:59.923170] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.362 [2024-04-17 14:37:59.923206] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.362 [2024-04-17 14:37:59.923219] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:18:51.362 [2024-04-17 14:37:59.923227] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.362 [2024-04-17 14:37:59.923248] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923258] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923264] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.362 [2024-04-17 14:37:59.923276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.362 [2024-04-17 14:37:59.923313] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.362 [2024-04-17 14:37:59.923443] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.362 [2024-04-17 14:37:59.923463] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.362 [2024-04-17 14:37:59.923472] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923480] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.362 [2024-04-17 14:37:59.923503] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923511] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923518] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.362 [2024-04-17 14:37:59.923532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.362 [2024-04-17 14:37:59.923568] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.362 [2024-04-17 14:37:59.923693] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.362 [2024-04-17 14:37:59.923714] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.362 [2024-04-17 14:37:59.923724] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923732] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.362 [2024-04-17 14:37:59.923753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923769] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.923777] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.362 [2024-04-17 14:37:59.923791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.362 [2024-04-17 14:37:59.923827] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.362 [2024-04-17 14:37:59.927982] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.362 [2024-04-17 14:37:59.928020] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.362 [2024-04-17 14:37:59.928027] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.928032] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.362 [2024-04-17 14:37:59.928057] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.928064] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.928068] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ecd300) 00:18:51.362 [2024-04-17 14:37:59.928080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.362 [2024-04-17 14:37:59.928120] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f15de0, cid 3, qid 0 00:18:51.362 [2024-04-17 14:37:59.928251] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.362 [2024-04-17 14:37:59.928264] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.362 [2024-04-17 14:37:59.928269] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.362 [2024-04-17 14:37:59.928273] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f15de0) on tqpair=0x1ecd300 00:18:51.362 [2024-04-17 14:37:59.928284] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:18:51.362 00:18:51.362 14:37:59 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:51.626 [2024-04-17 14:37:59.967105] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:51.626 [2024-04-17 14:37:59.967168] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71284 ] 00:18:51.626 [2024-04-17 14:38:00.111923] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:18:51.626 [2024-04-17 14:38:00.116075] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:51.626 [2024-04-17 14:38:00.116096] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:51.626 [2024-04-17 14:38:00.116114] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:51.626 [2024-04-17 14:38:00.116133] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:18:51.626 [2024-04-17 14:38:00.116306] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:18:51.626 [2024-04-17 14:38:00.116361] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa79300 0 00:18:51.626 [2024-04-17 14:38:00.124005] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:51.626 [2024-04-17 14:38:00.124060] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:51.626 [2024-04-17 14:38:00.124067] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:51.626 [2024-04-17 14:38:00.124071] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:51.626 [2024-04-17 14:38:00.124128] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.626 [2024-04-17 14:38:00.124136] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.626 [2024-04-17 14:38:00.124141] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.626 [2024-04-17 14:38:00.124161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:51.626 [2024-04-17 14:38:00.124207] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.626 [2024-04-17 14:38:00.131982] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.626 [2024-04-17 14:38:00.132033] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.626 [2024-04-17 14:38:00.132039] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.626 [2024-04-17 14:38:00.132045] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac19c0) on tqpair=0xa79300 00:18:51.626 [2024-04-17 14:38:00.132061] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:51.626 [2024-04-17 14:38:00.132076] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:18:51.626 [2024-04-17 14:38:00.132084] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:18:51.626 [2024-04-17 14:38:00.132114] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.626 [2024-04-17 14:38:00.132121] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.626 [2024-04-17 14:38:00.132125] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.626 [2024-04-17 14:38:00.132140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.626 [2024-04-17 14:38:00.132190] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.626 [2024-04-17 14:38:00.132308] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.626 [2024-04-17 14:38:00.132320] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.626 [2024-04-17 14:38:00.132324] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.626 [2024-04-17 14:38:00.132328] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac19c0) on tqpair=0xa79300 00:18:51.626 [2024-04-17 14:38:00.132340] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:18:51.626 [2024-04-17 14:38:00.132350] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:18:51.626 [2024-04-17 14:38:00.132359] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.626 [2024-04-17 14:38:00.132363] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.626 [2024-04-17 14:38:00.132367] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.626 [2024-04-17 14:38:00.132377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.626 [2024-04-17 14:38:00.132401] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.626 [2024-04-17 14:38:00.132482] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.626 [2024-04-17 14:38:00.132489] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.626 [2024-04-17 14:38:00.132493] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.132498] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac19c0) on tqpair=0xa79300 00:18:51.627 [2024-04-17 14:38:00.132504] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:18:51.627 [2024-04-17 14:38:00.132514] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:18:51.627 [2024-04-17 14:38:00.132522] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.132526] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.132531] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.627 [2024-04-17 14:38:00.132538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.627 [2024-04-17 14:38:00.132557] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.627 [2024-04-17 14:38:00.132637] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.627 [2024-04-17 14:38:00.132644] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.627 [2024-04-17 14:38:00.132648] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.132653] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac19c0) on tqpair=0xa79300 00:18:51.627 [2024-04-17 14:38:00.132659] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:51.627 [2024-04-17 14:38:00.132670] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.132675] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.132679] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.627 [2024-04-17 14:38:00.132687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.627 [2024-04-17 14:38:00.132705] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.627 [2024-04-17 14:38:00.132780] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.627 [2024-04-17 14:38:00.132797] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.627 [2024-04-17 14:38:00.132801] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.132806] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac19c0) on tqpair=0xa79300 00:18:51.627 [2024-04-17 14:38:00.132811] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:18:51.627 [2024-04-17 14:38:00.132817] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:18:51.627 [2024-04-17 14:38:00.132826] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:51.627 [2024-04-17 14:38:00.132941] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:18:51.627 [2024-04-17 14:38:00.132966] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:51.627 [2024-04-17 14:38:00.132979] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.132983] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.132988] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.627 [2024-04-17 14:38:00.132996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.627 [2024-04-17 14:38:00.133020] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.627 [2024-04-17 14:38:00.133105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.627 [2024-04-17 14:38:00.133112] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.627 [2024-04-17 14:38:00.133116] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133120] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac19c0) on tqpair=0xa79300 00:18:51.627 [2024-04-17 14:38:00.133126] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:51.627 [2024-04-17 14:38:00.133137] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133146] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.627 [2024-04-17 14:38:00.133154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.627 [2024-04-17 14:38:00.133172] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.627 [2024-04-17 14:38:00.133262] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.627 [2024-04-17 14:38:00.133272] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.627 [2024-04-17 14:38:00.133276] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133281] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac19c0) on tqpair=0xa79300 00:18:51.627 [2024-04-17 14:38:00.133286] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:51.627 [2024-04-17 14:38:00.133292] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:18:51.627 [2024-04-17 14:38:00.133302] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:18:51.627 [2024-04-17 14:38:00.133313] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 
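The _nvme_ctrlr_set_state records above trace the initiator-side initialization that spdk_nvme_identify drives for nqn.2016-06.io.spdk:cnode1: connect the admin queue over TCP, read VS and CAP, write CC.EN = 1, wait for CSTS.RDY = 1, then run IDENTIFY and the feature setup that follows. A minimal standalone sketch of driving the same sequence through SPDK's public host API is shown below; it is not part of this test run, and the program name and printed fields are illustrative only.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Illustrative process name; any string works here. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same TCP target this test log connects to. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/*
	 * spdk_nvme_connect() runs the state machine recorded in the debug
	 * log above: connect adminq, read VS/CAP, set CC.EN = 1, wait for
	 * CSTS.RDY = 1, then IDENTIFY CONTROLLER and AER/keep-alive setup.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.subnqn);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Built against the same SPDK tree this job checks out, the sketch should report the same Model Number ("SPDK bdev Controller", space-padded to 40 bytes) that appears in the identify summary further below.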
00:18:51.627 [2024-04-17 14:38:00.133328] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133333] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.627 [2024-04-17 14:38:00.133341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.627 [2024-04-17 14:38:00.133364] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.627 [2024-04-17 14:38:00.133499] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.627 [2024-04-17 14:38:00.133515] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.627 [2024-04-17 14:38:00.133520] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133525] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa79300): datao=0, datal=4096, cccid=0 00:18:51.627 [2024-04-17 14:38:00.133530] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac19c0) on tqpair(0xa79300): expected_datao=0, payload_size=4096 00:18:51.627 [2024-04-17 14:38:00.133535] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133545] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133550] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133559] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.627 [2024-04-17 14:38:00.133566] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.627 [2024-04-17 14:38:00.133570] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133574] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac19c0) on tqpair=0xa79300 00:18:51.627 [2024-04-17 14:38:00.133585] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:18:51.627 [2024-04-17 14:38:00.133590] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:18:51.627 [2024-04-17 14:38:00.133595] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:18:51.627 [2024-04-17 14:38:00.133605] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:18:51.627 [2024-04-17 14:38:00.133611] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:18:51.627 [2024-04-17 14:38:00.133617] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:18:51.627 [2024-04-17 14:38:00.133627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:18:51.627 [2024-04-17 14:38:00.133635] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133640] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133644] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.627 [2024-04-17 14:38:00.133653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:51.627 [2024-04-17 14:38:00.133675] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.627 [2024-04-17 14:38:00.133756] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.627 [2024-04-17 14:38:00.133763] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.627 [2024-04-17 14:38:00.133767] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133771] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac19c0) on tqpair=0xa79300 00:18:51.627 [2024-04-17 14:38:00.133780] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133784] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa79300) 00:18:51.627 [2024-04-17 14:38:00.133795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.627 [2024-04-17 14:38:00.133802] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.627 [2024-04-17 14:38:00.133806] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.133810] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa79300) 00:18:51.628 [2024-04-17 14:38:00.133817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.628 [2024-04-17 14:38:00.133824] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.133828] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.133832] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa79300) 00:18:51.628 [2024-04-17 14:38:00.133838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.628 [2024-04-17 14:38:00.133845] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.133850] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.133853] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.628 [2024-04-17 14:38:00.133860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.628 [2024-04-17 14:38:00.133865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.133879] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.133887] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.133892] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa79300) 00:18:51.628 [2024-04-17 14:38:00.133899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.628 [2024-04-17 14:38:00.133920] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac19c0, cid 0, qid 0 00:18:51.628 [2024-04-17 14:38:00.133928] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1b20, cid 1, qid 0 00:18:51.628 [2024-04-17 14:38:00.133933] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1c80, cid 2, qid 0 00:18:51.628 [2024-04-17 14:38:00.133938] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.628 [2024-04-17 14:38:00.133943] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1f40, cid 4, qid 0 00:18:51.628 [2024-04-17 14:38:00.134106] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.628 [2024-04-17 14:38:00.134123] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.628 [2024-04-17 14:38:00.134127] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134132] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1f40) on tqpair=0xa79300 00:18:51.628 [2024-04-17 14:38:00.134139] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:18:51.628 [2024-04-17 14:38:00.134145] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.134155] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.134162] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.134169] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134178] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa79300) 00:18:51.628 [2024-04-17 14:38:00.134186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:51.628 [2024-04-17 14:38:00.134209] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1f40, cid 4, qid 0 00:18:51.628 [2024-04-17 14:38:00.134296] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.628 [2024-04-17 14:38:00.134306] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.628 [2024-04-17 14:38:00.134310] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134315] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1f40) on tqpair=0xa79300 00:18:51.628 [2024-04-17 14:38:00.134373] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.134391] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.134403] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134407] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa79300) 00:18:51.628 [2024-04-17 14:38:00.134416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.628 [2024-04-17 14:38:00.134439] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1f40, cid 4, qid 0 00:18:51.628 [2024-04-17 14:38:00.134543] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.628 [2024-04-17 14:38:00.134551] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.628 [2024-04-17 14:38:00.134555] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134560] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa79300): datao=0, datal=4096, cccid=4 00:18:51.628 [2024-04-17 14:38:00.134565] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac1f40) on tqpair(0xa79300): expected_datao=0, payload_size=4096 00:18:51.628 [2024-04-17 14:38:00.134570] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134579] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134584] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134594] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.628 [2024-04-17 14:38:00.134601] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.628 [2024-04-17 14:38:00.134604] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134609] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1f40) on tqpair=0xa79300 00:18:51.628 [2024-04-17 14:38:00.134621] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:18:51.628 [2024-04-17 14:38:00.134640] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.134660] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.134674] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134681] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa79300) 00:18:51.628 [2024-04-17 14:38:00.134693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.628 [2024-04-17 14:38:00.134727] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1f40, cid 4, qid 0 00:18:51.628 [2024-04-17 14:38:00.134836] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.628 [2024-04-17 14:38:00.134857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.628 [2024-04-17 14:38:00.134862] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134866] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa79300): datao=0, datal=4096, cccid=4 00:18:51.628 [2024-04-17 14:38:00.134871] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac1f40) on tqpair(0xa79300): expected_datao=0, 
payload_size=4096 00:18:51.628 [2024-04-17 14:38:00.134876] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134884] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134889] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134898] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.628 [2024-04-17 14:38:00.134904] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.628 [2024-04-17 14:38:00.134908] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134912] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1f40) on tqpair=0xa79300 00:18:51.628 [2024-04-17 14:38:00.134932] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.134945] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:51.628 [2024-04-17 14:38:00.134971] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.134979] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa79300) 00:18:51.628 [2024-04-17 14:38:00.134990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.628 [2024-04-17 14:38:00.135029] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1f40, cid 4, qid 0 00:18:51.628 [2024-04-17 14:38:00.135118] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.628 [2024-04-17 14:38:00.135126] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.628 [2024-04-17 14:38:00.135130] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.135134] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa79300): datao=0, datal=4096, cccid=4 00:18:51.628 [2024-04-17 14:38:00.135139] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac1f40) on tqpair(0xa79300): expected_datao=0, payload_size=4096 00:18:51.628 [2024-04-17 14:38:00.135144] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.135152] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.135156] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.628 [2024-04-17 14:38:00.135166] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.629 [2024-04-17 14:38:00.135172] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.629 [2024-04-17 14:38:00.135176] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135181] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1f40) on tqpair=0xa79300 00:18:51.629 [2024-04-17 14:38:00.135192] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:51.629 [2024-04-17 14:38:00.135201] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 
ms) 00:18:51.629 [2024-04-17 14:38:00.135213] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:18:51.629 [2024-04-17 14:38:00.135221] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:51.629 [2024-04-17 14:38:00.135228] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:18:51.629 [2024-04-17 14:38:00.135237] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:18:51.629 [2024-04-17 14:38:00.135246] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:18:51.629 [2024-04-17 14:38:00.135255] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:18:51.629 [2024-04-17 14:38:00.135284] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135290] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa79300) 00:18:51.629 [2024-04-17 14:38:00.135299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.629 [2024-04-17 14:38:00.135307] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135311] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135315] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa79300) 00:18:51.629 [2024-04-17 14:38:00.135322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.629 [2024-04-17 14:38:00.135351] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1f40, cid 4, qid 0 00:18:51.629 [2024-04-17 14:38:00.135360] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac20a0, cid 5, qid 0 00:18:51.629 [2024-04-17 14:38:00.135466] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.629 [2024-04-17 14:38:00.135487] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.629 [2024-04-17 14:38:00.135491] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135496] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1f40) on tqpair=0xa79300 00:18:51.629 [2024-04-17 14:38:00.135503] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.629 [2024-04-17 14:38:00.135510] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.629 [2024-04-17 14:38:00.135514] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135518] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac20a0) on tqpair=0xa79300 00:18:51.629 [2024-04-17 14:38:00.135530] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135535] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa79300) 00:18:51.629 [2024-04-17 14:38:00.135543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER 
MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.629 [2024-04-17 14:38:00.135563] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac20a0, cid 5, qid 0 00:18:51.629 [2024-04-17 14:38:00.135632] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.629 [2024-04-17 14:38:00.135643] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.629 [2024-04-17 14:38:00.135648] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135652] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac20a0) on tqpair=0xa79300 00:18:51.629 [2024-04-17 14:38:00.135670] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135675] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa79300) 00:18:51.629 [2024-04-17 14:38:00.135682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.629 [2024-04-17 14:38:00.135700] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac20a0, cid 5, qid 0 00:18:51.629 [2024-04-17 14:38:00.135773] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.629 [2024-04-17 14:38:00.135779] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.629 [2024-04-17 14:38:00.135783] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135788] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac20a0) on tqpair=0xa79300 00:18:51.629 [2024-04-17 14:38:00.135798] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135803] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa79300) 00:18:51.629 [2024-04-17 14:38:00.135811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.629 [2024-04-17 14:38:00.135827] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac20a0, cid 5, qid 0 00:18:51.629 [2024-04-17 14:38:00.135908] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.629 [2024-04-17 14:38:00.135915] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.629 [2024-04-17 14:38:00.135919] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135923] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac20a0) on tqpair=0xa79300 00:18:51.629 [2024-04-17 14:38:00.135939] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.135944] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa79300) 00:18:51.629 [2024-04-17 14:38:00.140008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.629 [2024-04-17 14:38:00.140045] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140050] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa79300) 00:18:51.629 [2024-04-17 14:38:00.140058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.629 [2024-04-17 14:38:00.140066] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140070] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa79300) 00:18:51.629 [2024-04-17 14:38:00.140077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.629 [2024-04-17 14:38:00.140086] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140090] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa79300) 00:18:51.629 [2024-04-17 14:38:00.140097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.629 [2024-04-17 14:38:00.140146] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac20a0, cid 5, qid 0 00:18:51.629 [2024-04-17 14:38:00.140156] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1f40, cid 4, qid 0 00:18:51.629 [2024-04-17 14:38:00.140161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac2200, cid 6, qid 0 00:18:51.629 [2024-04-17 14:38:00.140166] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac2360, cid 7, qid 0 00:18:51.629 [2024-04-17 14:38:00.140437] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.629 [2024-04-17 14:38:00.140448] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.629 [2024-04-17 14:38:00.140453] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140457] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa79300): datao=0, datal=8192, cccid=5 00:18:51.629 [2024-04-17 14:38:00.140463] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac20a0) on tqpair(0xa79300): expected_datao=0, payload_size=8192 00:18:51.629 [2024-04-17 14:38:00.140468] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140487] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140492] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140499] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.629 [2024-04-17 14:38:00.140505] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.629 [2024-04-17 14:38:00.140509] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140513] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa79300): datao=0, datal=512, cccid=4 00:18:51.629 [2024-04-17 14:38:00.140518] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac1f40) on tqpair(0xa79300): expected_datao=0, payload_size=512 00:18:51.629 [2024-04-17 14:38:00.140523] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140530] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140534] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140540] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:18:51.629 [2024-04-17 14:38:00.140546] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.629 [2024-04-17 14:38:00.140550] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140554] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa79300): datao=0, datal=512, cccid=6 00:18:51.629 [2024-04-17 14:38:00.140558] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac2200) on tqpair(0xa79300): expected_datao=0, payload_size=512 00:18:51.629 [2024-04-17 14:38:00.140563] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140569] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.629 [2024-04-17 14:38:00.140574] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.630 [2024-04-17 14:38:00.140580] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:51.630 [2024-04-17 14:38:00.140586] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:51.630 [2024-04-17 14:38:00.140589] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:51.630 [2024-04-17 14:38:00.140593] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa79300): datao=0, datal=4096, cccid=7 00:18:51.630 [2024-04-17 14:38:00.140598] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xac2360) on tqpair(0xa79300): expected_datao=0, payload_size=4096 00:18:51.630 [2024-04-17 14:38:00.140603] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.630 [2024-04-17 14:38:00.140610] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:51.630 [2024-04-17 14:38:00.140614] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:51.630 [2024-04-17 14:38:00.140623] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.630 [2024-04-17 14:38:00.140629] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.630 [2024-04-17 14:38:00.140633] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.630 [2024-04-17 14:38:00.140638] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac20a0) on tqpair=0xa79300 00:18:51.630 [2024-04-17 14:38:00.140663] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.630 [2024-04-17 14:38:00.140670] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.630 [2024-04-17 14:38:00.140674] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.630 [2024-04-17 14:38:00.140679] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1f40) on tqpair=0xa79300 00:18:51.630 [2024-04-17 14:38:00.140689] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.630 [2024-04-17 14:38:00.140696] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.630 [2024-04-17 14:38:00.140700] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.630 [2024-04-17 14:38:00.140704] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac2200) on tqpair=0xa79300 00:18:51.630 [2024-04-17 14:38:00.140712] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.630 [2024-04-17 14:38:00.140718] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.630 [2024-04-17 14:38:00.140722] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
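The "Namespace 1 was added" record earlier in this sequence and the controller/namespace summary printed below are also reachable programmatically. A short, illustrative helper (again a sketch, assuming the ctrlr handle obtained from spdk_nvme_connect() in the previous example) that walks the active namespace list:

#include <stdio.h>
#include "spdk/nvme.h"

/* Walk the active namespace list of an already-connected controller. */
static void print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL) {
			continue;
		}
		printf("Namespace %u: %llu bytes, block size %u\n",
		       nsid,
		       (unsigned long long)spdk_nvme_ns_get_size(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}

For this run the only namespace-add record in the log is for NSID 1, so the loop would presumably visit a single namespace on this target.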
00:18:51.630 [2024-04-17 14:38:00.140726] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac2360) on tqpair=0xa79300 00:18:51.630 ===================================================== 00:18:51.630 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:51.630 ===================================================== 00:18:51.630 Controller Capabilities/Features 00:18:51.630 ================================ 00:18:51.630 Vendor ID: 8086 00:18:51.630 Subsystem Vendor ID: 8086 00:18:51.630 Serial Number: SPDK00000000000001 00:18:51.630 Model Number: SPDK bdev Controller 00:18:51.630 Firmware Version: 24.05 00:18:51.630 Recommended Arb Burst: 6 00:18:51.630 IEEE OUI Identifier: e4 d2 5c 00:18:51.630 Multi-path I/O 00:18:51.630 May have multiple subsystem ports: Yes 00:18:51.630 May have multiple controllers: Yes 00:18:51.630 Associated with SR-IOV VF: No 00:18:51.630 Max Data Transfer Size: 131072 00:18:51.630 Max Number of Namespaces: 32 00:18:51.630 Max Number of I/O Queues: 127 00:18:51.630 NVMe Specification Version (VS): 1.3 00:18:51.630 NVMe Specification Version (Identify): 1.3 00:18:51.630 Maximum Queue Entries: 128 00:18:51.630 Contiguous Queues Required: Yes 00:18:51.630 Arbitration Mechanisms Supported 00:18:51.630 Weighted Round Robin: Not Supported 00:18:51.630 Vendor Specific: Not Supported 00:18:51.630 Reset Timeout: 15000 ms 00:18:51.630 Doorbell Stride: 4 bytes 00:18:51.630 NVM Subsystem Reset: Not Supported 00:18:51.630 Command Sets Supported 00:18:51.630 NVM Command Set: Supported 00:18:51.630 Boot Partition: Not Supported 00:18:51.630 Memory Page Size Minimum: 4096 bytes 00:18:51.630 Memory Page Size Maximum: 4096 bytes 00:18:51.630 Persistent Memory Region: Not Supported 00:18:51.630 Optional Asynchronous Events Supported 00:18:51.630 Namespace Attribute Notices: Supported 00:18:51.630 Firmware Activation Notices: Not Supported 00:18:51.630 ANA Change Notices: Not Supported 00:18:51.630 PLE Aggregate Log Change Notices: Not Supported 00:18:51.630 LBA Status Info Alert Notices: Not Supported 00:18:51.630 EGE Aggregate Log Change Notices: Not Supported 00:18:51.630 Normal NVM Subsystem Shutdown event: Not Supported 00:18:51.630 Zone Descriptor Change Notices: Not Supported 00:18:51.630 Discovery Log Change Notices: Not Supported 00:18:51.630 Controller Attributes 00:18:51.630 128-bit Host Identifier: Supported 00:18:51.630 Non-Operational Permissive Mode: Not Supported 00:18:51.630 NVM Sets: Not Supported 00:18:51.630 Read Recovery Levels: Not Supported 00:18:51.630 Endurance Groups: Not Supported 00:18:51.630 Predictable Latency Mode: Not Supported 00:18:51.630 Traffic Based Keep ALive: Not Supported 00:18:51.630 Namespace Granularity: Not Supported 00:18:51.630 SQ Associations: Not Supported 00:18:51.630 UUID List: Not Supported 00:18:51.630 Multi-Domain Subsystem: Not Supported 00:18:51.630 Fixed Capacity Management: Not Supported 00:18:51.630 Variable Capacity Management: Not Supported 00:18:51.630 Delete Endurance Group: Not Supported 00:18:51.630 Delete NVM Set: Not Supported 00:18:51.630 Extended LBA Formats Supported: Not Supported 00:18:51.630 Flexible Data Placement Supported: Not Supported 00:18:51.630 00:18:51.630 Controller Memory Buffer Support 00:18:51.630 ================================ 00:18:51.630 Supported: No 00:18:51.630 00:18:51.630 Persistent Memory Region Support 00:18:51.630 ================================ 00:18:51.630 Supported: No 00:18:51.630 00:18:51.630 Admin Command Set Attributes 00:18:51.630 
============================ 00:18:51.630 Security Send/Receive: Not Supported 00:18:51.630 Format NVM: Not Supported 00:18:51.630 Firmware Activate/Download: Not Supported 00:18:51.630 Namespace Management: Not Supported 00:18:51.630 Device Self-Test: Not Supported 00:18:51.630 Directives: Not Supported 00:18:51.630 NVMe-MI: Not Supported 00:18:51.630 Virtualization Management: Not Supported 00:18:51.630 Doorbell Buffer Config: Not Supported 00:18:51.630 Get LBA Status Capability: Not Supported 00:18:51.630 Command & Feature Lockdown Capability: Not Supported 00:18:51.630 Abort Command Limit: 4 00:18:51.630 Async Event Request Limit: 4 00:18:51.630 Number of Firmware Slots: N/A 00:18:51.630 Firmware Slot 1 Read-Only: N/A 00:18:51.630 Firmware Activation Without Reset: N/A 00:18:51.630 Multiple Update Detection Support: N/A 00:18:51.630 Firmware Update Granularity: No Information Provided 00:18:51.630 Per-Namespace SMART Log: No 00:18:51.630 Asymmetric Namespace Access Log Page: Not Supported 00:18:51.630 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:51.630 Command Effects Log Page: Supported 00:18:51.630 Get Log Page Extended Data: Supported 00:18:51.630 Telemetry Log Pages: Not Supported 00:18:51.630 Persistent Event Log Pages: Not Supported 00:18:51.630 Supported Log Pages Log Page: May Support 00:18:51.630 Commands Supported & Effects Log Page: Not Supported 00:18:51.630 Feature Identifiers & Effects Log Page:May Support 00:18:51.630 NVMe-MI Commands & Effects Log Page: May Support 00:18:51.630 Data Area 4 for Telemetry Log: Not Supported 00:18:51.630 Error Log Page Entries Supported: 128 00:18:51.630 Keep Alive: Supported 00:18:51.630 Keep Alive Granularity: 10000 ms 00:18:51.630 00:18:51.630 NVM Command Set Attributes 00:18:51.630 ========================== 00:18:51.630 Submission Queue Entry Size 00:18:51.630 Max: 64 00:18:51.630 Min: 64 00:18:51.630 Completion Queue Entry Size 00:18:51.630 Max: 16 00:18:51.630 Min: 16 00:18:51.630 Number of Namespaces: 32 00:18:51.630 Compare Command: Supported 00:18:51.630 Write Uncorrectable Command: Not Supported 00:18:51.630 Dataset Management Command: Supported 00:18:51.630 Write Zeroes Command: Supported 00:18:51.630 Set Features Save Field: Not Supported 00:18:51.630 Reservations: Supported 00:18:51.630 Timestamp: Not Supported 00:18:51.631 Copy: Supported 00:18:51.631 Volatile Write Cache: Present 00:18:51.631 Atomic Write Unit (Normal): 1 00:18:51.631 Atomic Write Unit (PFail): 1 00:18:51.631 Atomic Compare & Write Unit: 1 00:18:51.631 Fused Compare & Write: Supported 00:18:51.631 Scatter-Gather List 00:18:51.631 SGL Command Set: Supported 00:18:51.631 SGL Keyed: Supported 00:18:51.631 SGL Bit Bucket Descriptor: Not Supported 00:18:51.631 SGL Metadata Pointer: Not Supported 00:18:51.631 Oversized SGL: Not Supported 00:18:51.631 SGL Metadata Address: Not Supported 00:18:51.631 SGL Offset: Supported 00:18:51.631 Transport SGL Data Block: Not Supported 00:18:51.631 Replay Protected Memory Block: Not Supported 00:18:51.631 00:18:51.631 Firmware Slot Information 00:18:51.631 ========================= 00:18:51.631 Active slot: 1 00:18:51.631 Slot 1 Firmware Revision: 24.05 00:18:51.631 00:18:51.631 00:18:51.631 Commands Supported and Effects 00:18:51.631 ============================== 00:18:51.631 Admin Commands 00:18:51.631 -------------- 00:18:51.631 Get Log Page (02h): Supported 00:18:51.631 Identify (06h): Supported 00:18:51.631 Abort (08h): Supported 00:18:51.631 Set Features (09h): Supported 00:18:51.631 Get Features (0Ah): Supported 
00:18:51.631 Asynchronous Event Request (0Ch): Supported 00:18:51.631 Keep Alive (18h): Supported 00:18:51.631 I/O Commands 00:18:51.631 ------------ 00:18:51.631 Flush (00h): Supported LBA-Change 00:18:51.631 Write (01h): Supported LBA-Change 00:18:51.631 Read (02h): Supported 00:18:51.631 Compare (05h): Supported 00:18:51.631 Write Zeroes (08h): Supported LBA-Change 00:18:51.631 Dataset Management (09h): Supported LBA-Change 00:18:51.631 Copy (19h): Supported LBA-Change 00:18:51.631 Unknown (79h): Supported LBA-Change 00:18:51.631 Unknown (7Ah): Supported 00:18:51.631 00:18:51.631 Error Log 00:18:51.631 ========= 00:18:51.631 00:18:51.631 Arbitration 00:18:51.631 =========== 00:18:51.631 Arbitration Burst: 1 00:18:51.631 00:18:51.631 Power Management 00:18:51.631 ================ 00:18:51.631 Number of Power States: 1 00:18:51.631 Current Power State: Power State #0 00:18:51.631 Power State #0: 00:18:51.631 Max Power: 0.00 W 00:18:51.631 Non-Operational State: Operational 00:18:51.631 Entry Latency: Not Reported 00:18:51.631 Exit Latency: Not Reported 00:18:51.631 Relative Read Throughput: 0 00:18:51.631 Relative Read Latency: 0 00:18:51.631 Relative Write Throughput: 0 00:18:51.631 Relative Write Latency: 0 00:18:51.631 Idle Power: Not Reported 00:18:51.631 Active Power: Not Reported 00:18:51.631 Non-Operational Permissive Mode: Not Supported 00:18:51.631 00:18:51.631 Health Information 00:18:51.631 ================== 00:18:51.631 Critical Warnings: 00:18:51.631 Available Spare Space: OK 00:18:51.631 Temperature: OK 00:18:51.631 Device Reliability: OK 00:18:51.631 Read Only: No 00:18:51.631 Volatile Memory Backup: OK 00:18:51.631 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:51.631 Temperature Threshold: [2024-04-17 14:38:00.140846] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.631 [2024-04-17 14:38:00.140853] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa79300) 00:18:51.631 [2024-04-17 14:38:00.140863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.631 [2024-04-17 14:38:00.140889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac2360, cid 7, qid 0 00:18:51.631 [2024-04-17 14:38:00.141011] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.631 [2024-04-17 14:38:00.141021] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.631 [2024-04-17 14:38:00.141026] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.631 [2024-04-17 14:38:00.141030] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac2360) on tqpair=0xa79300 00:18:51.631 [2024-04-17 14:38:00.141075] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:18:51.631 [2024-04-17 14:38:00.141092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.631 [2024-04-17 14:38:00.141099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.631 [2024-04-17 14:38:00.141106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.631 [2024-04-17 14:38:00.141114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.631 [2024-04-17 14:38:00.141125] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.631 [2024-04-17 14:38:00.141129] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.631 [2024-04-17 14:38:00.141133] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.631 [2024-04-17 14:38:00.141142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.631 [2024-04-17 14:38:00.141168] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.631 [2024-04-17 14:38:00.141256] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.631 [2024-04-17 14:38:00.141269] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.631 [2024-04-17 14:38:00.141276] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.631 [2024-04-17 14:38:00.141282] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.631 [2024-04-17 14:38:00.141302] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.631 [2024-04-17 14:38:00.141316] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.631 [2024-04-17 14:38:00.141323] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.631 [2024-04-17 14:38:00.141335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.631 [2024-04-17 14:38:00.141369] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.631 [2024-04-17 14:38:00.141480] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.631 [2024-04-17 14:38:00.141487] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.631 [2024-04-17 14:38:00.141491] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.631 [2024-04-17 14:38:00.141496] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.631 [2024-04-17 14:38:00.141502] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:18:51.631 [2024-04-17 14:38:00.141507] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:18:51.631 [2024-04-17 14:38:00.141518] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.141523] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.141527] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.141535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.141552] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.141629] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.141661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.141667] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:18:51.632 [2024-04-17 14:38:00.141671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.141685] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.141691] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.141695] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.141703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.141726] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.141809] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.141821] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.141825] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.141830] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.141841] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.141846] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.141850] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.141858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.141876] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.141965] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.141974] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.141978] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.141982] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.141994] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.141999] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142003] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.142011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.142031] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.142104] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.142111] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.142115] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142119] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.142141] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142145] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142149] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.142157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.142174] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.142249] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.142273] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.142281] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142288] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.142303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142308] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142312] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.142321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.142346] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.142413] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.142420] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.142424] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142428] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.142439] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142443] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142448] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.142455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.142473] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.142551] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.142567] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.142571] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142576] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.142588] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142593] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142597] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.142605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.142624] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.142691] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.142698] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.142702] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142706] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.142717] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142721] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142725] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.142733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.142750] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.142818] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.142825] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.142829] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142833] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.142844] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142849] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142853] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.142860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.142877] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.142960] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.142969] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.142973] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142977] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.632 [2024-04-17 14:38:00.142988] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142993] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.632 [2024-04-17 14:38:00.142997] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.632 [2024-04-17 14:38:00.143005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.632 [2024-04-17 14:38:00.143025] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.632 [2024-04-17 14:38:00.143094] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.632 [2024-04-17 14:38:00.143105] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.632 [2024-04-17 14:38:00.143110] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143115] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.633 [2024-04-17 14:38:00.143126] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143131] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143135] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.633 [2024-04-17 14:38:00.143142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.633 [2024-04-17 14:38:00.143161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.633 [2024-04-17 14:38:00.143245] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.633 [2024-04-17 14:38:00.143258] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.633 [2024-04-17 14:38:00.143265] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143272] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.633 [2024-04-17 14:38:00.143284] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143289] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143293] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.633 [2024-04-17 14:38:00.143302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.633 [2024-04-17 14:38:00.143323] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.633 [2024-04-17 14:38:00.143390] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.633 [2024-04-17 14:38:00.143397] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.633 [2024-04-17 14:38:00.143401] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143406] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.633 [2024-04-17 14:38:00.143416] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143421] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143425] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.633 [2024-04-17 14:38:00.143433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.633 [2024-04-17 14:38:00.143450] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 
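The long run of near-identical FABRIC PROPERTY GET entries above and below is the controller shutdown handshake that starts at "Prepare to destruct SSD": the host writes CC.SHN with a Fabrics Property Set, then polls CSTS with Property Get until shutdown completes (the trace reports it finishing in 6 milliseconds, well inside the 10000 ms timeout it set). To pull just that timeline out of a saved console log, something along these lines works (log file name is illustrative):

    # extract the shutdown handshake from a saved console log
    grep -E 'Prepare to destruct|nvme_ctrlr_shutdown|PROPERTY (SET|GET)' nvmf_identify_console.log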
00:18:51.633 [2024-04-17 14:38:00.143515] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.633 [2024-04-17 14:38:00.143527] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.633 [2024-04-17 14:38:00.143532] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143536] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.633 [2024-04-17 14:38:00.143548] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143552] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143557] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.633 [2024-04-17 14:38:00.143564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.633 [2024-04-17 14:38:00.143583] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.633 [2024-04-17 14:38:00.143654] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.633 [2024-04-17 14:38:00.143661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.633 [2024-04-17 14:38:00.143665] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143669] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.633 [2024-04-17 14:38:00.143680] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.633 [2024-04-17 14:38:00.143696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.633 [2024-04-17 14:38:00.143714] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.633 [2024-04-17 14:38:00.143781] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.633 [2024-04-17 14:38:00.143788] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.633 [2024-04-17 14:38:00.143792] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143796] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.633 [2024-04-17 14:38:00.143807] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143812] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143816] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.633 [2024-04-17 14:38:00.143823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.633 [2024-04-17 14:38:00.143840] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.633 [2024-04-17 14:38:00.143919] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.633 [2024-04-17 14:38:00.143930] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:18:51.633 [2024-04-17 14:38:00.143934] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.143939] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.633 [2024-04-17 14:38:00.147984] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.148021] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.148027] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa79300) 00:18:51.633 [2024-04-17 14:38:00.148041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.633 [2024-04-17 14:38:00.148083] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xac1de0, cid 3, qid 0 00:18:51.633 [2024-04-17 14:38:00.148201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:51.633 [2024-04-17 14:38:00.148209] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:51.633 [2024-04-17 14:38:00.148214] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:51.633 [2024-04-17 14:38:00.148219] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xac1de0) on tqpair=0xa79300 00:18:51.633 [2024-04-17 14:38:00.148229] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:18:51.633 0 Kelvin (-273 Celsius) 00:18:51.633 Available Spare: 0% 00:18:51.633 Available Spare Threshold: 0% 00:18:51.633 Life Percentage Used: 0% 00:18:51.633 Data Units Read: 0 00:18:51.633 Data Units Written: 0 00:18:51.633 Host Read Commands: 0 00:18:51.633 Host Write Commands: 0 00:18:51.633 Controller Busy Time: 0 minutes 00:18:51.633 Power Cycles: 0 00:18:51.633 Power On Hours: 0 hours 00:18:51.633 Unsafe Shutdowns: 0 00:18:51.633 Unrecoverable Media Errors: 0 00:18:51.633 Lifetime Error Log Entries: 0 00:18:51.633 Warning Temperature Time: 0 minutes 00:18:51.633 Critical Temperature Time: 0 minutes 00:18:51.633 00:18:51.633 Number of Queues 00:18:51.633 ================ 00:18:51.633 Number of I/O Submission Queues: 127 00:18:51.633 Number of I/O Completion Queues: 127 00:18:51.633 00:18:51.633 Active Namespaces 00:18:51.633 ================= 00:18:51.633 Namespace ID:1 00:18:51.633 Error Recovery Timeout: Unlimited 00:18:51.633 Command Set Identifier: NVM (00h) 00:18:51.633 Deallocate: Supported 00:18:51.633 Deallocated/Unwritten Error: Not Supported 00:18:51.633 Deallocated Read Value: Unknown 00:18:51.633 Deallocate in Write Zeroes: Not Supported 00:18:51.633 Deallocated Guard Field: 0xFFFF 00:18:51.633 Flush: Supported 00:18:51.633 Reservation: Supported 00:18:51.633 Namespace Sharing Capabilities: Multiple Controllers 00:18:51.633 Size (in LBAs): 131072 (0GiB) 00:18:51.633 Capacity (in LBAs): 131072 (0GiB) 00:18:51.633 Utilization (in LBAs): 131072 (0GiB) 00:18:51.633 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:51.633 EUI64: ABCDEF0123456789 00:18:51.633 UUID: d26c2a09-0d16-4dc6-b886-5ba841c3d095 00:18:51.633 Thin Provisioning: Not Supported 00:18:51.633 Per-NS Atomic Units: Yes 00:18:51.633 Atomic Boundary Size (Normal): 0 00:18:51.633 Atomic Boundary Size (PFail): 0 00:18:51.633 Atomic Boundary Offset: 0 00:18:51.633 Maximum Single Source Range Length: 65535 00:18:51.633 Maximum Copy Length: 65535 00:18:51.633 Maximum Source Range Count: 1 00:18:51.633 
NGUID/EUI64 Never Reused: No 00:18:51.633 Namespace Write Protected: No 00:18:51.633 Number of LBA Formats: 1 00:18:51.633 Current LBA Format: LBA Format #00 00:18:51.634 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:51.634 00:18:51.634 14:38:00 -- host/identify.sh@51 -- # sync 00:18:51.634 14:38:00 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.634 14:38:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.634 14:38:00 -- common/autotest_common.sh@10 -- # set +x 00:18:51.634 14:38:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.634 14:38:00 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:51.634 14:38:00 -- host/identify.sh@56 -- # nvmftestfini 00:18:51.634 14:38:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:51.634 14:38:00 -- nvmf/common.sh@117 -- # sync 00:18:51.634 14:38:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.634 14:38:00 -- nvmf/common.sh@120 -- # set +e 00:18:51.634 14:38:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.634 14:38:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.634 rmmod nvme_tcp 00:18:51.892 rmmod nvme_fabrics 00:18:51.892 rmmod nvme_keyring 00:18:51.892 14:38:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.892 14:38:00 -- nvmf/common.sh@124 -- # set -e 00:18:51.892 14:38:00 -- nvmf/common.sh@125 -- # return 0 00:18:51.892 14:38:00 -- nvmf/common.sh@478 -- # '[' -n 71255 ']' 00:18:51.892 14:38:00 -- nvmf/common.sh@479 -- # killprocess 71255 00:18:51.892 14:38:00 -- common/autotest_common.sh@936 -- # '[' -z 71255 ']' 00:18:51.892 14:38:00 -- common/autotest_common.sh@940 -- # kill -0 71255 00:18:51.892 14:38:00 -- common/autotest_common.sh@941 -- # uname 00:18:51.892 14:38:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.892 14:38:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71255 00:18:51.892 killing process with pid 71255 00:18:51.892 14:38:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:51.892 14:38:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:51.892 14:38:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71255' 00:18:51.892 14:38:00 -- common/autotest_common.sh@955 -- # kill 71255 00:18:51.892 [2024-04-17 14:38:00.284727] app.c: 930:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:18:51.892 14:38:00 -- common/autotest_common.sh@960 -- # wait 71255 00:18:52.151 14:38:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:52.151 14:38:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:52.151 14:38:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:52.151 14:38:00 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.151 14:38:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.151 14:38:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.151 14:38:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.151 14:38:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.151 14:38:00 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:52.151 00:18:52.151 real 0m1.802s 00:18:52.151 user 0m4.004s 00:18:52.151 sys 0m0.545s 00:18:52.151 14:38:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:52.151 14:38:00 -- common/autotest_common.sh@10 -- # set +x 00:18:52.151 
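The Identify dump above is what the SPDK host-side identify example printed for the subsystem this test created: nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, model "SPDK bdev Controller", firmware 24.05, with a 64 MiB Malloc-backed namespace (131072 LBAs of 512 bytes). A kernel-initiator equivalent with nvme-cli, useful for cross-checking the same fields by hand, would look roughly like this (the /dev/nvme0 names assume no other NVMe devices are attached):

    # connect, dump controller and namespace identify data, then disconnect
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0
    nvme id-ns /dev/nvme0n1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1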
************************************ 00:18:52.151 END TEST nvmf_identify 00:18:52.151 ************************************ 00:18:52.151 14:38:00 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:52.151 14:38:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:52.151 14:38:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:52.151 14:38:00 -- common/autotest_common.sh@10 -- # set +x 00:18:52.151 ************************************ 00:18:52.151 START TEST nvmf_perf 00:18:52.151 ************************************ 00:18:52.151 14:38:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:52.151 * Looking for test storage... 00:18:52.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:52.151 14:38:00 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.151 14:38:00 -- nvmf/common.sh@7 -- # uname -s 00:18:52.151 14:38:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.151 14:38:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.151 14:38:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.151 14:38:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.151 14:38:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.151 14:38:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.151 14:38:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.151 14:38:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.151 14:38:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.151 14:38:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.151 14:38:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:18:52.151 14:38:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:18:52.151 14:38:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.151 14:38:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.151 14:38:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.151 14:38:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.151 14:38:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.151 14:38:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.151 14:38:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.151 14:38:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.152 14:38:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.152 14:38:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.152 14:38:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.152 14:38:00 -- paths/export.sh@5 -- # export PATH 00:18:52.152 14:38:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.152 14:38:00 -- nvmf/common.sh@47 -- # : 0 00:18:52.152 14:38:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.152 14:38:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.152 14:38:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.152 14:38:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.152 14:38:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.152 14:38:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.152 14:38:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.152 14:38:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.410 14:38:00 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:52.410 14:38:00 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:52.410 14:38:00 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:52.410 14:38:00 -- host/perf.sh@17 -- # nvmftestinit 00:18:52.410 14:38:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:52.410 14:38:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.410 14:38:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:52.410 14:38:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:52.410 14:38:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:52.410 14:38:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.410 14:38:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.410 14:38:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.410 14:38:00 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:52.410 14:38:00 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:52.410 14:38:00 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:52.410 14:38:00 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 
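Sourcing nvmf/common.sh above also shows how the harness identifies itself to the target: nvme gen-hostnqn produces a fresh UUID-based host NQN, the UUID suffix doubles as the host ID, and the pair is stored in NVME_HOST to be appended to any nvme connect the tests issue. A hand-run kernel-initiator connect against the subsystem this job sets up would be roughly the following (illustrative only; this particular job drives the target with SPDK's own initiator rather than nvme-cli):

    HOSTNQN=$(nvme gen-hostnqn)
    HOSTID=${HOSTNQN##*:}    # UUID portion of the NQN, the same value common.sh stores as NVME_HOSTID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"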
00:18:52.410 14:38:00 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:52.410 14:38:00 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:52.410 14:38:00 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.410 14:38:00 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.410 14:38:00 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:52.410 14:38:00 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:52.410 14:38:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.410 14:38:00 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.410 14:38:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.410 14:38:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.410 14:38:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.410 14:38:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.410 14:38:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.410 14:38:00 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.410 14:38:00 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:52.410 14:38:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:52.410 Cannot find device "nvmf_tgt_br" 00:18:52.410 14:38:00 -- nvmf/common.sh@155 -- # true 00:18:52.410 14:38:00 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.410 Cannot find device "nvmf_tgt_br2" 00:18:52.410 14:38:00 -- nvmf/common.sh@156 -- # true 00:18:52.410 14:38:00 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:52.410 14:38:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:52.410 Cannot find device "nvmf_tgt_br" 00:18:52.410 14:38:00 -- nvmf/common.sh@158 -- # true 00:18:52.410 14:38:00 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:52.410 Cannot find device "nvmf_tgt_br2" 00:18:52.410 14:38:00 -- nvmf/common.sh@159 -- # true 00:18:52.410 14:38:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:52.410 14:38:00 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:52.410 14:38:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.410 14:38:00 -- nvmf/common.sh@162 -- # true 00:18:52.410 14:38:00 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.410 14:38:00 -- nvmf/common.sh@163 -- # true 00:18:52.410 14:38:00 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.410 14:38:00 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.410 14:38:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.410 14:38:00 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.410 14:38:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.410 14:38:00 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.410 14:38:00 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:52.410 14:38:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:52.410 14:38:00 -- nvmf/common.sh@180 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:52.410 14:38:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:52.410 14:38:00 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:52.410 14:38:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:52.410 14:38:00 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:52.410 14:38:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.410 14:38:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:52.410 14:38:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.669 14:38:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:52.669 14:38:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:52.669 14:38:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.669 14:38:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:52.669 14:38:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:52.669 14:38:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:52.669 14:38:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:52.669 14:38:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:52.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:18:52.669 00:18:52.669 --- 10.0.0.2 ping statistics --- 00:18:52.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.669 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:18:52.669 14:38:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:52.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:52.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:18:52.669 00:18:52.669 --- 10.0.0.3 ping statistics --- 00:18:52.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.669 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:52.669 14:38:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:52.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:18:52.669 00:18:52.669 --- 10.0.0.1 ping statistics --- 00:18:52.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.669 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:52.669 14:38:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.669 14:38:01 -- nvmf/common.sh@422 -- # return 0 00:18:52.669 14:38:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:52.669 14:38:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.669 14:38:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:52.669 14:38:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:52.669 14:38:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.669 14:38:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:52.669 14:38:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:52.669 14:38:01 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:52.669 14:38:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:52.669 14:38:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:52.669 14:38:01 -- common/autotest_common.sh@10 -- # set +x 00:18:52.669 14:38:01 -- nvmf/common.sh@470 -- # nvmfpid=71455 00:18:52.669 14:38:01 -- nvmf/common.sh@471 -- # waitforlisten 71455 00:18:52.669 14:38:01 -- common/autotest_common.sh@817 -- # '[' -z 71455 ']' 00:18:52.669 14:38:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.669 14:38:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:52.669 14:38:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.669 14:38:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:52.669 14:38:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:52.669 14:38:01 -- common/autotest_common.sh@10 -- # set +x 00:18:52.669 [2024-04-17 14:38:01.192145] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:52.669 [2024-04-17 14:38:01.192764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.927 [2024-04-17 14:38:01.335183] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.927 [2024-04-17 14:38:01.395770] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.927 [2024-04-17 14:38:01.396299] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.927 [2024-04-17 14:38:01.396552] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.927 [2024-04-17 14:38:01.396788] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.928 [2024-04-17 14:38:01.397020] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
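The nvmf_veth_init trace above builds the whole test network in software: a namespace (nvmf_tgt_ns_spdk) holding the target ends of two veth pairs, the initiator end left in the root namespace with 10.0.0.1, target addresses 10.0.0.2 and 10.0.0.3 inside the namespace, and a bridge (nvmf_br) plus two iptables rules tying it together, verified by the three pings. Stripped of the harness bookkeeping, the essential commands are roughly these (the second target interface, nvmf_tgt_if2 with 10.0.0.3, follows the same pattern and is omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # same sanity check the harness runs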
00:18:52.928 [2024-04-17 14:38:01.397407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.928 [2024-04-17 14:38:01.397526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.928 [2024-04-17 14:38:01.397527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.928 [2024-04-17 14:38:01.397469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.862 14:38:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:53.862 14:38:02 -- common/autotest_common.sh@850 -- # return 0 00:18:53.862 14:38:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:53.862 14:38:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:53.862 14:38:02 -- common/autotest_common.sh@10 -- # set +x 00:18:53.862 14:38:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.862 14:38:02 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:53.862 14:38:02 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:54.123 14:38:02 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:54.123 14:38:02 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:54.381 14:38:02 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:54.381 14:38:02 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.947 14:38:03 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:54.947 14:38:03 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:18:54.947 14:38:03 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:54.947 14:38:03 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:54.947 14:38:03 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:54.947 [2024-04-17 14:38:03.496900] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.947 14:38:03 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:55.204 14:38:03 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:55.204 14:38:03 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.462 14:38:04 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:55.462 14:38:04 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:56.027 14:38:04 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.314 [2024-04-17 14:38:04.688022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.314 14:38:04 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:56.572 14:38:05 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:56.572 14:38:05 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:56.572 14:38:05 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:56.572 14:38:05 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:57.948 Initializing NVMe 
Controllers 00:18:57.948 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:57.948 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:57.948 Initialization complete. Launching workers. 00:18:57.948 ======================================================== 00:18:57.948 Latency(us) 00:18:57.948 Device Information : IOPS MiB/s Average min max 00:18:57.948 PCIE (0000:00:10.0) NSID 1 from core 0: 23839.98 93.12 1341.88 315.82 8807.30 00:18:57.948 ======================================================== 00:18:57.948 Total : 23839.98 93.12 1341.88 315.82 8807.30 00:18:57.948 00:18:57.948 14:38:06 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:59.323 Initializing NVMe Controllers 00:18:59.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:59.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:59.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:59.323 Initialization complete. Launching workers. 00:18:59.323 ======================================================== 00:18:59.323 Latency(us) 00:18:59.323 Device Information : IOPS MiB/s Average min max 00:18:59.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2881.92 11.26 346.63 116.40 4336.20 00:18:59.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.12 7918.09 12041.67 00:18:59.323 ======================================================== 00:18:59.323 Total : 3005.92 11.74 667.63 116.40 12041.67 00:18:59.323 00:18:59.323 14:38:07 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:00.707 Initializing NVMe Controllers 00:19:00.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:00.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:00.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:00.707 Initialization complete. Launching workers. 00:19:00.707 ======================================================== 00:19:00.707 Latency(us) 00:19:00.707 Device Information : IOPS MiB/s Average min max 00:19:00.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7156.25 27.95 4472.24 532.95 12785.13 00:19:00.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3885.22 15.18 8249.91 6175.99 17149.85 00:19:00.707 ======================================================== 00:19:00.707 Total : 11041.47 43.13 5801.51 532.95 17149.85 00:19:00.707 00:19:00.707 14:38:08 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:00.707 14:38:08 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:03.237 Initializing NVMe Controllers 00:19:03.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:03.237 Controller IO queue size 128, less than required. 00:19:03.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:03.237 Controller IO queue size 128, less than required. 
00:19:03.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:03.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:03.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:03.237 Initialization complete. Launching workers. 00:19:03.237 ======================================================== 00:19:03.237 Latency(us) 00:19:03.237 Device Information : IOPS MiB/s Average min max 00:19:03.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1636.52 409.13 79348.17 46199.42 138381.03 00:19:03.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.59 150.65 222798.51 78984.71 399933.61 00:19:03.237 ======================================================== 00:19:03.237 Total : 2239.11 559.78 117953.42 46199.42 399933.61 00:19:03.237 00:19:03.237 14:38:11 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:19:03.237 No valid NVMe controllers or AIO or URING devices found 00:19:03.237 Initializing NVMe Controllers 00:19:03.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:03.237 Controller IO queue size 128, less than required. 00:19:03.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:03.237 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:03.237 Controller IO queue size 128, less than required. 00:19:03.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:03.237 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:19:03.237 WARNING: Some requested NVMe devices were skipped 00:19:03.237 14:38:11 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:19:05.765 Initializing NVMe Controllers 00:19:05.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:05.765 Controller IO queue size 128, less than required. 00:19:05.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:05.765 Controller IO queue size 128, less than required. 00:19:05.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:05.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:05.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:05.765 Initialization complete. Launching workers. 
00:19:05.765 00:19:05.765 ==================== 00:19:05.765 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:05.765 TCP transport: 00:19:05.765 polls: 6799 00:19:05.765 idle_polls: 0 00:19:05.765 sock_completions: 6799 00:19:05.765 nvme_completions: 6375 00:19:05.765 submitted_requests: 9604 00:19:05.765 queued_requests: 1 00:19:05.765 00:19:05.765 ==================== 00:19:05.765 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:05.765 TCP transport: 00:19:05.766 polls: 7416 00:19:05.766 idle_polls: 0 00:19:05.766 sock_completions: 7416 00:19:05.766 nvme_completions: 6679 00:19:05.766 submitted_requests: 10074 00:19:05.766 queued_requests: 1 00:19:05.766 ======================================================== 00:19:05.766 Latency(us) 00:19:05.766 Device Information : IOPS MiB/s Average min max 00:19:05.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1593.41 398.35 81917.66 36917.77 148498.84 00:19:05.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1669.40 417.35 76915.03 37039.45 120990.11 00:19:05.766 ======================================================== 00:19:05.766 Total : 3262.81 815.70 79358.08 36917.77 148498.84 00:19:05.766 00:19:05.766 14:38:14 -- host/perf.sh@66 -- # sync 00:19:05.766 14:38:14 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:06.023 14:38:14 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:06.023 14:38:14 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:06.023 14:38:14 -- host/perf.sh@114 -- # nvmftestfini 00:19:06.023 14:38:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:06.023 14:38:14 -- nvmf/common.sh@117 -- # sync 00:19:06.023 14:38:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:06.023 14:38:14 -- nvmf/common.sh@120 -- # set +e 00:19:06.023 14:38:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.023 14:38:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:06.023 rmmod nvme_tcp 00:19:06.023 rmmod nvme_fabrics 00:19:06.023 rmmod nvme_keyring 00:19:06.023 14:38:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.023 14:38:14 -- nvmf/common.sh@124 -- # set -e 00:19:06.023 14:38:14 -- nvmf/common.sh@125 -- # return 0 00:19:06.023 14:38:14 -- nvmf/common.sh@478 -- # '[' -n 71455 ']' 00:19:06.023 14:38:14 -- nvmf/common.sh@479 -- # killprocess 71455 00:19:06.023 14:38:14 -- common/autotest_common.sh@936 -- # '[' -z 71455 ']' 00:19:06.023 14:38:14 -- common/autotest_common.sh@940 -- # kill -0 71455 00:19:06.023 14:38:14 -- common/autotest_common.sh@941 -- # uname 00:19:06.023 14:38:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.023 14:38:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71455 00:19:06.023 14:38:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:06.023 killing process with pid 71455 00:19:06.023 14:38:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:06.023 14:38:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71455' 00:19:06.023 14:38:14 -- common/autotest_common.sh@955 -- # kill 71455 00:19:06.023 14:38:14 -- common/autotest_common.sh@960 -- # wait 71455 00:19:06.957 14:38:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:06.957 14:38:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:06.957 14:38:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:06.957 14:38:15 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.957 14:38:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.957 14:38:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.957 14:38:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.957 14:38:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.957 14:38:15 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:06.957 ************************************ 00:19:06.957 END TEST nvmf_perf 00:19:06.957 ************************************ 00:19:06.957 00:19:06.957 real 0m14.697s 00:19:06.957 user 0m53.908s 00:19:06.957 sys 0m4.155s 00:19:06.957 14:38:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:06.957 14:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:06.957 14:38:15 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:06.957 14:38:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:06.957 14:38:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:06.957 14:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:06.957 ************************************ 00:19:06.957 START TEST nvmf_fio_host 00:19:06.957 ************************************ 00:19:06.957 14:38:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:06.957 * Looking for test storage... 00:19:06.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:06.957 14:38:15 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.958 14:38:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.958 14:38:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.958 14:38:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.958 14:38:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.958 14:38:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.958 14:38:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.958 14:38:15 -- paths/export.sh@5 -- # export PATH 00:19:06.958 14:38:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.958 14:38:15 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.958 14:38:15 -- nvmf/common.sh@7 -- # uname -s 00:19:07.216 14:38:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.216 14:38:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.216 14:38:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.216 14:38:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.216 14:38:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.216 14:38:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.216 14:38:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.216 14:38:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.216 14:38:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.216 14:38:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.216 14:38:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:19:07.216 14:38:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:19:07.216 14:38:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.216 14:38:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.216 14:38:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:07.216 14:38:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.216 14:38:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.216 14:38:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.216 14:38:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.216 14:38:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.217 14:38:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.217 14:38:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.217 14:38:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.217 14:38:15 -- paths/export.sh@5 -- # export PATH 00:19:07.217 14:38:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.217 14:38:15 -- nvmf/common.sh@47 -- # : 0 00:19:07.217 14:38:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.217 14:38:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.217 14:38:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.217 14:38:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.217 14:38:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.217 14:38:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.217 14:38:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.217 14:38:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.217 14:38:15 -- host/fio.sh@12 -- # nvmftestinit 00:19:07.217 14:38:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:07.217 14:38:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.217 14:38:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:07.217 14:38:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:07.217 14:38:15 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:19:07.217 14:38:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.217 14:38:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.217 14:38:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.217 14:38:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:07.217 14:38:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:07.217 14:38:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:07.217 14:38:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:07.217 14:38:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:07.217 14:38:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:07.217 14:38:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.217 14:38:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.217 14:38:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:07.217 14:38:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:07.217 14:38:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:07.217 14:38:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:07.217 14:38:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:07.217 14:38:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.217 14:38:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:07.217 14:38:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:07.217 14:38:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:07.217 14:38:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:07.217 14:38:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:07.217 14:38:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:07.217 Cannot find device "nvmf_tgt_br" 00:19:07.217 14:38:15 -- nvmf/common.sh@155 -- # true 00:19:07.217 14:38:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.217 Cannot find device "nvmf_tgt_br2" 00:19:07.217 14:38:15 -- nvmf/common.sh@156 -- # true 00:19:07.217 14:38:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:07.217 14:38:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:07.217 Cannot find device "nvmf_tgt_br" 00:19:07.217 14:38:15 -- nvmf/common.sh@158 -- # true 00:19:07.217 14:38:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:07.217 Cannot find device "nvmf_tgt_br2" 00:19:07.217 14:38:15 -- nvmf/common.sh@159 -- # true 00:19:07.217 14:38:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:07.217 14:38:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:07.217 14:38:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.217 14:38:15 -- nvmf/common.sh@162 -- # true 00:19:07.217 14:38:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.217 14:38:15 -- nvmf/common.sh@163 -- # true 00:19:07.217 14:38:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:07.217 14:38:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:07.217 14:38:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
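nvmf_veth_init, whose trace starts here, rebuilds the virtual test network from scratch: a veth pair for the initiator, veth pairs for the target that are moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side peers, an iptables rule opening port 4420, and ping checks at the end. The "Cannot find device" / "Cannot open network namespace" messages above are just the helper tearing down leftovers that do not exist yet. Condensed to the essential commands (a sketch of the layout, not the verbatim helper):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # (nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3 are set up the same way)

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge; ip link set nvmf_br up      # bridge ties the host-side peer interfaces together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator -> target reachability check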
00:19:07.217 14:38:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:07.217 14:38:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:07.217 14:38:15 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:07.217 14:38:15 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:07.217 14:38:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:07.217 14:38:15 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:07.217 14:38:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:07.217 14:38:15 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:07.217 14:38:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:07.477 14:38:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:07.477 14:38:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.477 14:38:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.477 14:38:15 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.477 14:38:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:07.477 14:38:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:07.477 14:38:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.477 14:38:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.477 14:38:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.477 14:38:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.477 14:38:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.477 14:38:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:07.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:19:07.477 00:19:07.477 --- 10.0.0.2 ping statistics --- 00:19:07.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.477 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:07.477 14:38:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:07.477 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.477 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:19:07.477 00:19:07.477 --- 10.0.0.3 ping statistics --- 00:19:07.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.477 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:07.477 14:38:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:07.477 00:19:07.477 --- 10.0.0.1 ping statistics --- 00:19:07.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.477 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:07.477 14:38:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.477 14:38:15 -- nvmf/common.sh@422 -- # return 0 00:19:07.477 14:38:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:07.477 14:38:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.477 14:38:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:07.477 14:38:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:07.477 14:38:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.477 14:38:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:07.477 14:38:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:07.477 14:38:15 -- host/fio.sh@14 -- # [[ y != y ]] 00:19:07.477 14:38:15 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:19:07.477 14:38:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:07.477 14:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:07.477 14:38:15 -- host/fio.sh@22 -- # nvmfpid=71873 00:19:07.477 14:38:15 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:07.477 14:38:15 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:07.477 14:38:15 -- host/fio.sh@26 -- # waitforlisten 71873 00:19:07.477 14:38:15 -- common/autotest_common.sh@817 -- # '[' -z 71873 ']' 00:19:07.477 14:38:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.477 14:38:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:07.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.477 14:38:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.477 14:38:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:07.477 14:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:07.477 [2024-04-17 14:38:15.995417] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:07.477 [2024-04-17 14:38:15.995683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.735 [2024-04-17 14:38:16.150519] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.735 [2024-04-17 14:38:16.226183] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.735 [2024-04-17 14:38:16.226461] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.735 [2024-04-17 14:38:16.226599] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.735 [2024-04-17 14:38:16.226824] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.735 [2024-04-17 14:38:16.226977] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.736 [2024-04-17 14:38:16.227207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.736 [2024-04-17 14:38:16.227347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.736 [2024-04-17 14:38:16.227820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.736 [2024-04-17 14:38:16.227859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.736 14:38:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:07.736 14:38:16 -- common/autotest_common.sh@850 -- # return 0 00:19:07.736 14:38:16 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.736 14:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.736 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:07.736 [2024-04-17 14:38:16.331297] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.032 14:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.032 14:38:16 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:19:08.032 14:38:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:08.032 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:08.032 14:38:16 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:08.032 14:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.032 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:08.032 Malloc1 00:19:08.032 14:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.032 14:38:16 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:08.032 14:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.032 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:08.032 14:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.032 14:38:16 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:08.032 14:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.032 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:08.032 14:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.032 14:38:16 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.032 14:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.032 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:08.032 [2024-04-17 14:38:16.423721] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.032 14:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.032 14:38:16 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:08.032 14:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.032 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:08.032 14:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.032 14:38:16 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:08.032 14:38:16 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:08.032 14:38:16 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
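The fio_nvme/fio_plugin helpers traced above wrap a plain fio run: fio_plugin checks with ldd whether the SPDK engine was built against a sanitizer, LD_PRELOADs the sanitizer runtime (if any) together with the engine itself, and hands the NVMe-oF connection string to fio through --filename. A condensed equivalent of the invocation (a sketch of the same steps, not the verbatim helpers):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty in this run: no sanitizer build
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The job file supplies the engine and queue settings (the banner that follows shows ioengine=spdk, iodepth=128); the quoted --filename value points the engine at the 10.0.0.2:4420 listener created a few lines earlier and selects namespace 1.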
00:19:08.032 14:38:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:08.032 14:38:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:08.032 14:38:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:08.032 14:38:16 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:08.032 14:38:16 -- common/autotest_common.sh@1327 -- # shift 00:19:08.032 14:38:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:08.032 14:38:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.032 14:38:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:08.032 14:38:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:08.032 14:38:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:08.032 14:38:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:08.032 14:38:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:08.032 14:38:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.032 14:38:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:08.032 14:38:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:08.032 14:38:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:08.032 14:38:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:08.032 14:38:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:08.032 14:38:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:08.032 14:38:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:08.032 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:08.032 fio-3.35 00:19:08.032 Starting 1 thread 00:19:10.584 00:19:10.584 test: (groupid=0, jobs=1): err= 0: pid=71921: Wed Apr 17 14:38:18 2024 00:19:10.584 read: IOPS=6426, BW=25.1MiB/s (26.3MB/s)(50.6MiB/2017msec) 00:19:10.584 slat (usec): min=2, max=219, avg= 3.36, stdev= 2.67 00:19:10.584 clat (usec): min=3722, max=38730, avg=10322.58, stdev=4932.10 00:19:10.584 lat (usec): min=3752, max=38735, avg=10325.94, stdev=4932.33 00:19:10.584 clat percentiles (usec): 00:19:10.584 | 1.00th=[ 7046], 5.00th=[ 7373], 10.00th=[ 7504], 20.00th=[ 7767], 00:19:10.584 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8455], 00:19:10.584 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[20841], 95.00th=[22152], 00:19:10.584 | 99.00th=[23725], 99.50th=[24773], 99.90th=[33817], 99.95th=[36439], 00:19:10.584 | 99.99th=[38536] 00:19:10.584 bw ( KiB/s): min=15568, max=33568, per=100.00%, avg=25785.50, stdev=8725.58, samples=4 00:19:10.584 iops : min= 3892, max= 8392, avg=6446.25, stdev=2181.27, samples=4 00:19:10.584 write: IOPS=6434, BW=25.1MiB/s (26.4MB/s)(50.7MiB/2017msec); 0 zone resets 00:19:10.584 slat (usec): min=2, max=162, avg= 3.52, stdev= 2.21 00:19:10.584 clat (usec): min=1587, max=36986, avg=9495.38, stdev=4486.50 00:19:10.584 lat (usec): min=1596, max=36992, avg=9498.89, stdev=4486.75 00:19:10.584 clat percentiles (usec): 00:19:10.584 | 1.00th=[ 6521], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7111], 00:19:10.584 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7767], 00:19:10.584 | 
70.00th=[ 8094], 80.00th=[ 9110], 90.00th=[18744], 95.00th=[20055], 00:19:10.584 | 99.00th=[21627], 99.50th=[22414], 99.90th=[33162], 99.95th=[33817], 00:19:10.584 | 99.99th=[36963] 00:19:10.584 bw ( KiB/s): min=16000, max=33024, per=100.00%, avg=25810.00, stdev=8054.66, samples=4 00:19:10.584 iops : min= 4000, max= 8256, avg=6452.50, stdev=2013.66, samples=4 00:19:10.584 lat (msec) : 2=0.01%, 4=0.03%, 10=81.78%, 20=9.14%, 50=9.04% 00:19:10.584 cpu : usr=66.22%, sys=24.90%, ctx=57, majf=0, minf=6 00:19:10.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:10.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.584 issued rwts: total=12963,12978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.584 00:19:10.584 Run status group 0 (all jobs): 00:19:10.584 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.6MiB (53.1MB), run=2017-2017msec 00:19:10.584 WRITE: bw=25.1MiB/s (26.4MB/s), 25.1MiB/s-25.1MiB/s (26.4MB/s-26.4MB/s), io=50.7MiB (53.2MB), run=2017-2017msec 00:19:10.584 14:38:18 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:10.584 14:38:18 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:10.584 14:38:18 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:10.584 14:38:18 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:10.584 14:38:18 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:10.584 14:38:18 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:10.584 14:38:18 -- common/autotest_common.sh@1327 -- # shift 00:19:10.584 14:38:18 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:10.584 14:38:18 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.584 14:38:18 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:10.584 14:38:18 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:10.584 14:38:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:10.584 14:38:18 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:10.584 14:38:18 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:10.584 14:38:18 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.584 14:38:18 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:10.584 14:38:18 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:10.584 14:38:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:10.584 14:38:18 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:10.584 14:38:18 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:10.584 14:38:18 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:10.584 14:38:18 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
00:19:10.584 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:10.584 fio-3.35 00:19:10.584 Starting 1 thread 00:19:13.128 00:19:13.128 test: (groupid=0, jobs=1): err= 0: pid=71969: Wed Apr 17 14:38:21 2024 00:19:13.128 read: IOPS=6961, BW=109MiB/s (114MB/s)(218MiB/2006msec) 00:19:13.128 slat (usec): min=3, max=159, avg= 4.44, stdev= 2.38 00:19:13.128 clat (usec): min=1555, max=23513, avg=10290.77, stdev=3395.52 00:19:13.128 lat (usec): min=1558, max=23517, avg=10295.21, stdev=3395.55 00:19:13.128 clat percentiles (usec): 00:19:13.128 | 1.00th=[ 4359], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7177], 00:19:13.128 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[10814], 00:19:13.128 | 70.00th=[12256], 80.00th=[13566], 90.00th=[15139], 95.00th=[16319], 00:19:13.128 | 99.00th=[18482], 99.50th=[19268], 99.90th=[20317], 99.95th=[21890], 00:19:13.128 | 99.99th=[23462] 00:19:13.128 bw ( KiB/s): min=48640, max=66016, per=51.00%, avg=56808.00, stdev=7565.56, samples=4 00:19:13.128 iops : min= 3040, max= 4126, avg=3550.50, stdev=472.85, samples=4 00:19:13.128 write: IOPS=4052, BW=63.3MiB/s (66.4MB/s)(116MiB/1830msec); 0 zone resets 00:19:13.128 slat (usec): min=37, max=294, avg=41.61, stdev= 7.99 00:19:13.128 clat (usec): min=3990, max=25932, avg=14186.85, stdev=2853.51 00:19:13.128 lat (usec): min=4033, max=25970, avg=14228.46, stdev=2853.26 00:19:13.128 clat percentiles (usec): 00:19:13.128 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10683], 20.00th=[11863], 00:19:13.128 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13829], 60.00th=[14615], 00:19:13.128 | 70.00th=[15401], 80.00th=[16319], 90.00th=[17957], 95.00th=[19268], 00:19:13.128 | 99.00th=[22414], 99.50th=[23200], 99.90th=[24249], 99.95th=[24511], 00:19:13.128 | 99.99th=[25822] 00:19:13.128 bw ( KiB/s): min=49440, max=68928, per=90.87%, avg=58920.00, stdev=8750.99, samples=4 00:19:13.128 iops : min= 3090, max= 4308, avg=3682.50, stdev=546.94, samples=4 00:19:13.128 lat (msec) : 2=0.04%, 4=0.25%, 10=35.43%, 20=62.98%, 50=1.31% 00:19:13.128 cpu : usr=77.06%, sys=17.01%, ctx=17, majf=0, minf=21 00:19:13.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:13.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.128 issued rwts: total=13965,7416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.128 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.128 00:19:13.128 Run status group 0 (all jobs): 00:19:13.128 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=218MiB (229MB), run=2006-2006msec 00:19:13.128 WRITE: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=116MiB (122MB), run=1830-1830msec 00:19:13.128 14:38:21 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.128 14:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.128 14:38:21 -- common/autotest_common.sh@10 -- # set +x 00:19:13.128 14:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.128 14:38:21 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:19:13.128 14:38:21 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:19:13.128 14:38:21 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:19:13.128 14:38:21 -- host/fio.sh@84 -- # nvmftestfini 00:19:13.128 14:38:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:13.128 14:38:21 -- 
nvmf/common.sh@117 -- # sync 00:19:13.128 14:38:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:13.128 14:38:21 -- nvmf/common.sh@120 -- # set +e 00:19:13.128 14:38:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.128 14:38:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:13.128 rmmod nvme_tcp 00:19:13.128 rmmod nvme_fabrics 00:19:13.128 rmmod nvme_keyring 00:19:13.128 14:38:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.128 14:38:21 -- nvmf/common.sh@124 -- # set -e 00:19:13.128 14:38:21 -- nvmf/common.sh@125 -- # return 0 00:19:13.128 14:38:21 -- nvmf/common.sh@478 -- # '[' -n 71873 ']' 00:19:13.128 14:38:21 -- nvmf/common.sh@479 -- # killprocess 71873 00:19:13.128 14:38:21 -- common/autotest_common.sh@936 -- # '[' -z 71873 ']' 00:19:13.128 14:38:21 -- common/autotest_common.sh@940 -- # kill -0 71873 00:19:13.128 14:38:21 -- common/autotest_common.sh@941 -- # uname 00:19:13.128 14:38:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:13.128 14:38:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71873 00:19:13.128 killing process with pid 71873 00:19:13.128 14:38:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:13.128 14:38:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:13.128 14:38:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71873' 00:19:13.128 14:38:21 -- common/autotest_common.sh@955 -- # kill 71873 00:19:13.128 14:38:21 -- common/autotest_common.sh@960 -- # wait 71873 00:19:13.387 14:38:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:13.387 14:38:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:13.387 14:38:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:13.387 14:38:21 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:13.387 14:38:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:13.387 14:38:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.387 14:38:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.387 14:38:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.387 14:38:21 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:13.387 ************************************ 00:19:13.387 END TEST nvmf_fio_host 00:19:13.387 ************************************ 00:19:13.387 00:19:13.387 real 0m6.369s 00:19:13.387 user 0m24.356s 00:19:13.387 sys 0m2.261s 00:19:13.387 14:38:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:13.387 14:38:21 -- common/autotest_common.sh@10 -- # set +x 00:19:13.387 14:38:21 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:13.387 14:38:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:13.387 14:38:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:13.387 14:38:21 -- common/autotest_common.sh@10 -- # set +x 00:19:13.387 ************************************ 00:19:13.387 START TEST nvmf_failover 00:19:13.387 ************************************ 00:19:13.387 14:38:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:13.645 * Looking for test storage... 
00:19:13.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:13.645 14:38:22 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:13.645 14:38:22 -- nvmf/common.sh@7 -- # uname -s 00:19:13.645 14:38:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.645 14:38:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.645 14:38:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.645 14:38:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.645 14:38:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.645 14:38:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.645 14:38:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.645 14:38:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.645 14:38:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.645 14:38:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.645 14:38:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:19:13.645 14:38:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:19:13.645 14:38:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.645 14:38:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.645 14:38:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:13.645 14:38:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.645 14:38:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.645 14:38:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.645 14:38:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.645 14:38:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.645 14:38:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.645 14:38:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.645 14:38:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.645 14:38:22 -- paths/export.sh@5 -- # export PATH 00:19:13.645 14:38:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.645 14:38:22 -- nvmf/common.sh@47 -- # : 0 00:19:13.645 14:38:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:13.645 14:38:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:13.645 14:38:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.645 14:38:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.645 14:38:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.645 14:38:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:13.645 14:38:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:13.645 14:38:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:13.645 14:38:22 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:13.645 14:38:22 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:13.645 14:38:22 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.645 14:38:22 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.645 14:38:22 -- host/failover.sh@18 -- # nvmftestinit 00:19:13.645 14:38:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:13.645 14:38:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.645 14:38:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:13.645 14:38:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:13.645 14:38:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:13.645 14:38:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.645 14:38:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.645 14:38:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.645 14:38:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:13.645 14:38:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:13.645 14:38:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:13.645 14:38:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:13.645 14:38:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:13.645 14:38:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:13.645 14:38:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.645 14:38:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.645 14:38:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:13.645 14:38:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:13.645 14:38:22 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:13.645 14:38:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:13.645 14:38:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:13.645 14:38:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.645 14:38:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:13.645 14:38:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:13.645 14:38:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:13.646 14:38:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:13.646 14:38:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:13.646 14:38:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:13.646 Cannot find device "nvmf_tgt_br" 00:19:13.646 14:38:22 -- nvmf/common.sh@155 -- # true 00:19:13.646 14:38:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:13.646 Cannot find device "nvmf_tgt_br2" 00:19:13.646 14:38:22 -- nvmf/common.sh@156 -- # true 00:19:13.646 14:38:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:13.646 14:38:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:13.646 Cannot find device "nvmf_tgt_br" 00:19:13.646 14:38:22 -- nvmf/common.sh@158 -- # true 00:19:13.646 14:38:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:13.646 Cannot find device "nvmf_tgt_br2" 00:19:13.646 14:38:22 -- nvmf/common.sh@159 -- # true 00:19:13.646 14:38:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:13.646 14:38:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:13.646 14:38:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:13.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.646 14:38:22 -- nvmf/common.sh@162 -- # true 00:19:13.646 14:38:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:13.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.646 14:38:22 -- nvmf/common.sh@163 -- # true 00:19:13.646 14:38:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:13.646 14:38:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:13.646 14:38:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:13.646 14:38:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:13.646 14:38:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:13.646 14:38:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:13.646 14:38:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:13.646 14:38:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:13.646 14:38:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:13.904 14:38:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:13.904 14:38:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:13.904 14:38:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:13.904 14:38:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:13.904 14:38:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:19:13.904 14:38:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:13.904 14:38:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:13.904 14:38:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:13.905 14:38:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:13.905 14:38:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:13.905 14:38:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:13.905 14:38:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:13.905 14:38:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:13.905 14:38:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:13.905 14:38:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:13.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:13.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:19:13.905 00:19:13.905 --- 10.0.0.2 ping statistics --- 00:19:13.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.905 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:13.905 14:38:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:13.905 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:13.905 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:19:13.905 00:19:13.905 --- 10.0.0.3 ping statistics --- 00:19:13.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.905 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:13.905 14:38:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:13.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:13.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:19:13.905 00:19:13.905 --- 10.0.0.1 ping statistics --- 00:19:13.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.905 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:13.905 14:38:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.905 14:38:22 -- nvmf/common.sh@422 -- # return 0 00:19:13.905 14:38:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:13.905 14:38:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.905 14:38:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:13.905 14:38:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:13.905 14:38:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.905 14:38:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:13.905 14:38:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:13.905 14:38:22 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:13.905 14:38:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:13.905 14:38:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:13.905 14:38:22 -- common/autotest_common.sh@10 -- # set +x 00:19:13.905 14:38:22 -- nvmf/common.sh@470 -- # nvmfpid=72182 00:19:13.905 14:38:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:13.905 14:38:22 -- nvmf/common.sh@471 -- # waitforlisten 72182 00:19:13.905 14:38:22 -- common/autotest_common.sh@817 -- # '[' -z 72182 ']' 00:19:13.905 14:38:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
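The network bring-up traced above can be reproduced by hand; a condensed sketch, assuming root privileges and the same namespace and interface names used by nvmf/common.sh:

  # target namespace plus the veth pairs carrying NVMe/TCP traffic
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-side ends move into the namespace and get the 10.0.0.0/24 addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side peers together
  for i in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$i" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP on port 4420 and let the bridge forward between its own ports
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  modprobe nvme-tcp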
00:19:13.905 14:38:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:13.905 14:38:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.905 14:38:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:13.905 14:38:22 -- common/autotest_common.sh@10 -- # set +x 00:19:13.905 [2024-04-17 14:38:22.454716] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:13.905 [2024-04-17 14:38:22.454808] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.163 [2024-04-17 14:38:22.591773] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:14.163 [2024-04-17 14:38:22.652090] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.163 [2024-04-17 14:38:22.652181] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.163 [2024-04-17 14:38:22.652202] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.163 [2024-04-17 14:38:22.652214] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.163 [2024-04-17 14:38:22.652226] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.163 [2024-04-17 14:38:22.652358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.163 [2024-04-17 14:38:22.652943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.163 [2024-04-17 14:38:22.653001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.097 14:38:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:15.097 14:38:23 -- common/autotest_common.sh@850 -- # return 0 00:19:15.097 14:38:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:15.097 14:38:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:15.097 14:38:23 -- common/autotest_common.sh@10 -- # set +x 00:19:15.097 14:38:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.097 14:38:23 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:15.355 [2024-04-17 14:38:23.741679] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.355 14:38:23 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:15.613 Malloc0 00:19:15.613 14:38:24 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:15.871 14:38:24 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:16.129 14:38:24 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.387 [2024-04-17 14:38:24.862041] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.387 14:38:24 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
00:19:16.646 [2024-04-17 14:38:25.118287] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:16.646 14:38:25 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:16.904 [2024-04-17 14:38:25.378588] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:16.904 14:38:25 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:16.904 14:38:25 -- host/failover.sh@31 -- # bdevperf_pid=72245 00:19:16.904 14:38:25 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.904 14:38:25 -- host/failover.sh@34 -- # waitforlisten 72245 /var/tmp/bdevperf.sock 00:19:16.904 14:38:25 -- common/autotest_common.sh@817 -- # '[' -z 72245 ']' 00:19:16.904 14:38:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.904 14:38:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:16.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.904 14:38:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.904 14:38:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:16.904 14:38:25 -- common/autotest_common.sh@10 -- # set +x 00:19:17.162 14:38:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:17.162 14:38:25 -- common/autotest_common.sh@850 -- # return 0 00:19:17.162 14:38:25 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:17.729 NVMe0n1 00:19:17.729 14:38:26 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:17.988 00:19:17.988 14:38:26 -- host/failover.sh@39 -- # run_test_pid=72261 00:19:17.988 14:38:26 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:17.988 14:38:26 -- host/failover.sh@41 -- # sleep 1 00:19:18.924 14:38:27 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.182 [2024-04-17 14:38:27.631069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd640 is same with the state(5) to be set 00:19:19.182 [2024-04-17 14:38:27.631159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd640 is same with the state(5) to be set 00:19:19.182 [2024-04-17 14:38:27.631178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd640 is same with the state(5) to be set 00:19:19.182 [2024-04-17 14:38:27.631193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd640 is same with the state(5) to be set 00:19:19.182 [2024-04-17 14:38:27.631207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd640 is same with the state(5) to be set 00:19:19.182 
[2024-04-17 14:38:27.631221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd640 is same with the state(5) to be set 00:19:19.182 [2024-04-17 14:38:27.631234] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd640 is same with the state(5) to be set 00:19:19.182 14:38:27 -- host/failover.sh@45 -- # sleep 3 00:19:22.492 14:38:30 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:22.492 00:19:22.492 14:38:30 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:22.751 [2024-04-17 14:38:31.234409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234477] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234508] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234545] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234558] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 [2024-04-17 14:38:31.234647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bdd20 is same with the state(5) to be set 00:19:22.751 14:38:31 -- host/failover.sh@50 -- # sleep 3 00:19:26.036 14:38:34 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.036 [2024-04-17 14:38:34.556280] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.036 14:38:34 -- host/failover.sh@55 -- # sleep 1 00:19:27.410 14:38:35 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:27.410 [2024-04-17 14:38:35.962494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bc690 is same with the state(5) to be set 00:19:27.410 [2024-04-17 14:38:35.962563] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bc690 is same with the state(5) to be set 00:19:27.410 [2024-04-17 14:38:35.962574] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bc690 is same with the state(5) to be set 00:19:27.410 [2024-04-17 14:38:35.962584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bc690 is same with the state(5) to be set 00:19:27.410 [2024-04-17 14:38:35.962592] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bc690 is same with the state(5) to be set 00:19:27.410 [2024-04-17 14:38:35.962601] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bc690 is same with the state(5) to be set 00:19:27.410 [2024-04-17 14:38:35.962609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bc690 is same with the state(5) to be set 00:19:27.410 [2024-04-17 14:38:35.962617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bc690 is same with the state(5) to be set 00:19:27.410 [2024-04-17 14:38:35.962625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bc690 is same with the state(5) to be set 00:19:27.410 14:38:35 -- host/failover.sh@59 -- # wait 72261 00:19:33.983 0 00:19:33.983 14:38:41 -- host/failover.sh@61 -- # killprocess 72245 00:19:33.983 14:38:41 -- common/autotest_common.sh@936 -- # '[' -z 72245 ']' 00:19:33.983 14:38:41 -- common/autotest_common.sh@940 -- # kill -0 72245 00:19:33.983 14:38:41 -- common/autotest_common.sh@941 -- # uname 00:19:33.983 14:38:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:33.983 14:38:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72245 00:19:33.983 14:38:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:33.983 14:38:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:33.983 14:38:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72245' 00:19:33.983 killing process with pid 72245 00:19:33.983 14:38:41 -- common/autotest_common.sh@955 -- # kill 72245 00:19:33.984 14:38:41 -- common/autotest_common.sh@960 -- # wait 72245 00:19:33.984 14:38:41 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:33.984 [2024-04-17 14:38:25.442487] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:33.984 [2024-04-17 14:38:25.442644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72245 ] 00:19:33.984 [2024-04-17 14:38:25.580379] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.984 [2024-04-17 14:38:25.637340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.984 Running I/O for 15 seconds... 
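Condensed from the rpc.py calls traced in host/failover.sh above, the target-side sequence that provokes the failover is, as a sketch (assuming nvmf_tgt is already running inside nvmf_tgt_ns_spdk and rpc.py talks to its default socket):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  # while bdevperf runs I/O, listeners are removed/re-added to force path switches
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422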
00:19:33.984 [2024-04-17 14:38:27.631337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.631408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.631471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.631526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.631587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.631649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.631701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.631759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.631818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.631878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.631938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.631978] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.631995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632326] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55168 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.984 [2024-04-17 14:38:27.632658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.632695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.632726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.632759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.632791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.632821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.984 [2024-04-17 14:38:27.632859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.984 [2024-04-17 14:38:27.632876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.632890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.632907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.632921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.632937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.632980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:33.985 [2024-04-17 14:38:27.633016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633343] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633653] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.985 [2024-04-17 14:38:27.633756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.633787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.633818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.633856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.633887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.633918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.633957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.633976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.633991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.634007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.634021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.634038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.634052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.634069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.634083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.634106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.634121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.634138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-17 14:38:27.634152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-17 14:38:27.634168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.634183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.634213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.634244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.634277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:33.986 [2024-04-17 14:38:27.634325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634677] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.634965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.634985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.635000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635017] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.635031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.635062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.635093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.635123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.635174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.635205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-17 14:38:27.635236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.635267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.635298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.635328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55560 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.635362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.635401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.635432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.635464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-17 14:38:27.635480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.986 [2024-04-17 14:38:27.635495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.987 [2024-04-17 14:38:27.635525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.987 [2024-04-17 14:38:27.635556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.987 [2024-04-17 14:38:27.635587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.987 [2024-04-17 14:38:27.635618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:27.635648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:33.987 [2024-04-17 14:38:27.635679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:27.635710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:27.635740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:27.635777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:27.635809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:27.635840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e2960 is same with the state(5) to be set 00:19:33.987 [2024-04-17 14:38:27.635879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:33.987 [2024-04-17 14:38:27.635890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:33.987 [2024-04-17 14:38:27.635902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54984 len:8 PRP1 0x0 PRP2 0x0 00:19:33.987 [2024-04-17 14:38:27.635915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.635989] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21e2960 was disconnected and freed. reset controller. 
00:19:33.987 [2024-04-17 14:38:27.636011] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:19:33.987 [2024-04-17 14:38:27.636092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.987 [2024-04-17 14:38:27.636115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.636131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.987 [2024-04-17 14:38:27.636145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.636159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.987 [2024-04-17 14:38:27.636173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.636187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.987 [2024-04-17 14:38:27.636201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:27.636215] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.987 [2024-04-17 14:38:27.636289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217c1d0 (9): Bad file descriptor 00:19:33.987 [2024-04-17 14:38:27.640839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.987 [2024-04-17 14:38:27.681708] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:33.987 [2024-04-17 14:38:31.233132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.987 [2024-04-17 14:38:31.233222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.233264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.987 [2024-04-17 14:38:31.233328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.233353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.987 [2024-04-17 14:38:31.233376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.233398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.987 [2024-04-17 14:38:31.233420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.233443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c1d0 is same with the state(5) to be set 00:19:33.987 [2024-04-17 14:38:31.234731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.234777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.234823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.234865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.234896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.234922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.234968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.234998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-17 14:38:31.235657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-17 14:38:31.235682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.235710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.235735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.235765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.235790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.235819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.235847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.235877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.235901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.235930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.235971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:33.988 [2024-04-17 14:38:31.236289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.236582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.236635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.236688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.236745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.236815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.236869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.236923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.236979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:123 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-17 14:38:31.237502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-17 14:38:31.237545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.988 [2024-04-17 14:38:31.237572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.237602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.237627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.237654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.237680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.237708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.237734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.237762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.237787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.237817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.237841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.237871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.237896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.237925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.237968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 
14:38:31.238581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.238850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.238903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.238937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.238986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.239043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.239097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.239151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.239205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.239258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.989 [2024-04-17 14:38:31.239311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.239367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.239419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.239473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.239527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.239581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.239634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.239704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.239756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.989 [2024-04-17 14:38:31.239785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.989 [2024-04-17 14:38:31.239811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.239840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.239864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.239893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.239918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.239962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.239991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.240044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.240098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.240152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.240206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.240261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.240317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.240385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.240442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.240496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.240551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.240605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.240658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.240714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.240767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.240822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240851] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.240877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.240929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.240991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.241200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.241255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.241308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.241363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.241416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.241470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.241524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.990 [2024-04-17 14:38:31.241578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.241961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.990 [2024-04-17 14:38:31.241990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.990 [2024-04-17 14:38:31.242018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f910 is same with the 
state(5) to be set 00:19:33.990 [2024-04-17 14:38:31.242052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:33.990 [2024-04-17 14:38:31.242072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:33.990 [2024-04-17 14:38:31.242092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6576 len:8 PRP1 0x0 PRP2 0x0 00:19:33.991 [2024-04-17 14:38:31.242117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:31.242200] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x217f910 was disconnected and freed. reset controller. 00:19:33.991 [2024-04-17 14:38:31.242234] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:19:33.991 [2024-04-17 14:38:31.242259] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.991 [2024-04-17 14:38:31.247076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.991 [2024-04-17 14:38:31.247206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217c1d0 (9): Bad file descriptor 00:19:33.991 [2024-04-17 14:38:31.278157] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:33.991 [2024-04-17 14:38:35.963402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.963514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.963567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.963622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.963670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.963752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.963802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.963852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.963903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.963968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.963994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 
[2024-04-17 14:38:35.964313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.964806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.964853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.964902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.964928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.964991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.965024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.965051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.965092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.965117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.965142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.965165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.965192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.965218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.965252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.991 [2024-04-17 14:38:35.965275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.965301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.965324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.965351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.965374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.991 [2024-04-17 14:38:35.965400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:54 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.991 [2024-04-17 14:38:35.965424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30992 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.965939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.965984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.992 [2024-04-17 14:38:35.966357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.992 [2024-04-17 14:38:35.966422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.992 
[2024-04-17 14:38:35.966474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.992 [2024-04-17 14:38:35.966526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.992 [2024-04-17 14:38:35.966575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.992 [2024-04-17 14:38:35.966625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.992 [2024-04-17 14:38:35.966676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.992 [2024-04-17 14:38:35.966727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.966929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.966981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.967008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.967034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.967059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.967104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.967131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.967159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.967182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.967209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.992 [2024-04-17 14:38:35.967235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.992 [2024-04-17 14:38:35.967261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.967284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.967335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.967387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.967437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.967493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.967543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.967595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.967646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.967700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.967763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.967814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.967867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.967916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.967961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.967988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.968041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.968091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.968142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 
14:38:35.968640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.968918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.968979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.969009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.969037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.969063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.969090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.969113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.969155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.993 [2024-04-17 14:38:35.969181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.969207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.969231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.969259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.969283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.969310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.969336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.969365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.993 [2024-04-17 14:38:35.969390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.993 [2024-04-17 14:38:35.969416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969722] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.969938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.969982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.970036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.970084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.970136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.970185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.994 [2024-04-17 14:38:35.970238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:33.994 
[2024-04-17 14:38:35.970369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:33.994 [2024-04-17 14:38:35.970391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30760 len:8 PRP1 0x0 PRP2 0x0 00:19:33.994 [2024-04-17 14:38:35.970414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970495] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x217f910 was disconnected and freed. reset controller. 00:19:33.994 [2024-04-17 14:38:35.970528] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:19:33.994 [2024-04-17 14:38:35.970637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.994 [2024-04-17 14:38:35.970671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.994 [2024-04-17 14:38:35.970736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.994 [2024-04-17 14:38:35.970784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.994 [2024-04-17 14:38:35.970832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.994 [2024-04-17 14:38:35.970855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.994 [2024-04-17 14:38:35.970966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217c1d0 (9): Bad file descriptor 00:19:33.994 [2024-04-17 14:38:35.975456] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.994 [2024-04-17 14:38:36.018303] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
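The wall of *NOTICE* lines above is the expected signature of a forced failover rather than a fault: every READ/WRITE still queued on the old submission queue is completed manually with ABORTED - SQ DELETION, the qpair (0x217f910) is disconnected and freed, and bdev_nvme fails over from 10.0.0.2:4422 to 10.0.0.2:4420 and resets the controller. The pass/fail check the script runs just below (host/failover.sh@65-67) simply counts those successful resets. A rough equivalent over a saved copy of this output, with try.txt standing in for the log file, might look like:

# Rough sketch of the check at host/failover.sh@65-67; try.txt stands in for a saved
# copy of the bdevperf output above. Three failovers are forced, so three resets are expected.
count=$(grep -c 'Resetting controller successful' try.txt)
(( count == 3 )) || { echo "expected 3 controller resets, got $count" >&2; exit 1; }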
00:19:33.994 
00:19:33.994 Latency(us) 
00:19:33.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:33.994 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:19:33.994 Verification LBA range: start 0x0 length 0x4000 
00:19:33.994 NVMe0n1 : 15.01 7296.32 28.50 207.63 0.00 17020.82 681.43 37653.41 
00:19:33.994 =================================================================================================================== 
00:19:33.994 Total : 7296.32 28.50 207.63 0.00 17020.82 681.43 37653.41 
00:19:33.994 Received shutdown signal, test time was about 15.000000 seconds 
00:19:33.994 
00:19:33.994 Latency(us) 
00:19:33.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:33.994 =================================================================================================================== 
00:19:33.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:19:33.994 14:38:41 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:19:33.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:33.994 14:38:41 -- host/failover.sh@65 -- # count=3 
00:19:33.994 14:38:41 -- host/failover.sh@67 -- # (( count != 3 )) 
00:19:33.994 14:38:41 -- host/failover.sh@73 -- # bdevperf_pid=72434 
00:19:33.994 14:38:41 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:19:33.994 14:38:41 -- host/failover.sh@75 -- # waitforlisten 72434 /var/tmp/bdevperf.sock 
00:19:33.994 14:38:41 -- common/autotest_common.sh@817 -- # '[' -z 72434 ']' 
00:19:33.994 14:38:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:33.994 14:38:41 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:19:33.994 14:38:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
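The second bdevperf instance traced here (pid 72434) is started with -z, so it parks itself and waits on the RPC socket given with -r instead of running immediately; waitforlisten polls that socket, and the actual one-second verify run is only kicked off later with bdevperf.py perform_tests (host/failover.sh@89 further down). A minimal sketch of that start-then-trigger pattern, reusing the paths from the trace but with an illustrative polling loop standing in for the harness's waitforlisten helper:

# Minimal sketch, not the harness code itself; the until-loop is an illustrative
# stand-in for waitforlisten.
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
until $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # keep polling until the bdevperf RPC socket answers
done
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

For scale, the 15-second table above is self-consistent: 7296.32 IOPS of 4096-byte I/O works out to 7296.32 * 4096 / 2^20, roughly the 28.50 MiB/s shown in the MiB/s column.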
00:19:33.994 14:38:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:33.994 14:38:41 -- common/autotest_common.sh@10 -- # set +x 00:19:34.560 14:38:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:34.561 14:38:42 -- common/autotest_common.sh@850 -- # return 0 00:19:34.561 14:38:42 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:34.818 [2024-04-17 14:38:43.272161] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:34.818 14:38:43 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:35.384 [2024-04-17 14:38:43.680591] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:35.384 14:38:43 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:35.646 NVMe0n1 00:19:35.646 14:38:44 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:36.211 00:19:36.211 14:38:44 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:36.469 00:19:36.469 14:38:45 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:36.469 14:38:45 -- host/failover.sh@82 -- # grep -q NVMe0 00:19:37.035 14:38:45 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:37.294 14:38:45 -- host/failover.sh@87 -- # sleep 3 00:19:40.600 14:38:48 -- host/failover.sh@88 -- # grep -q NVMe0 00:19:40.600 14:38:48 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:40.600 14:38:49 -- host/failover.sh@90 -- # run_test_pid=72528 00:19:40.600 14:38:49 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.600 14:38:49 -- host/failover.sh@92 -- # wait 72528 00:19:41.977 0 00:19:41.977 14:38:50 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:41.977 [2024-04-17 14:38:41.870709] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
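This phase is driven entirely over JSON-RPC: the target subsystem nqn.2016-06.io.spdk:cnode1 gets additional listeners on 10.0.0.2:4421 and 4422 (4420 is presumably still listening from the earlier part of the test), bdevperf attaches the NVMe0 controller to all three ports so bdev_nvme has alternate paths, and detaching the 4420 path is what forces the failover recorded in the try.txt dump that follows. Condensed into a sketch, using the same rpc.py calls traced above at host/failover.sh@76-84:

# Condensed sketch of the traced sequence; the loop just wraps the three attach calls.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421    # extra target listeners
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
for port in 4420 4421 4422; do                                       # one controller, three paths
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n $NQN
done
# drop the active path so bdev_nvme has to fail over to the next one
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN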
00:19:41.977 [2024-04-17 14:38:41.871665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72434 ] 00:19:41.977 [2024-04-17 14:38:42.010098] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.977 [2024-04-17 14:38:42.093146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.977 [2024-04-17 14:38:45.715635] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:19:41.977 [2024-04-17 14:38:45.715779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.977 [2024-04-17 14:38:45.715804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.977 [2024-04-17 14:38:45.715823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.977 [2024-04-17 14:38:45.715836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.977 [2024-04-17 14:38:45.715850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.977 [2024-04-17 14:38:45.715864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.977 [2024-04-17 14:38:45.715878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.977 [2024-04-17 14:38:45.715891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.977 [2024-04-17 14:38:45.715905] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:41.977 [2024-04-17 14:38:45.715972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.977 [2024-04-17 14:38:45.716008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c81d0 (9): Bad file descriptor 00:19:41.977 [2024-04-17 14:38:45.721140] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:41.977 Running I/O for 1 seconds... 
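The try.txt excerpt above shows the same sequence from inside bdevperf: a single-core app comes up, the 4420 path is dropped, bdev_nvme announces the failover to 10.0.0.2:4421, aborts the outstanding admin commands (the ASYNC EVENT REQUESTs) on the old qpair, and completes the controller reset before the one-second verify run finishes. A quick filter over the dumped file (the same path cat'd at host/failover.sh@94) pulls out just those two markers; this is not part of failover.sh, only a convenience:

# Confirm from the dumped log that the path moved and the reset completed.
grep -E 'Start failover from|Resetting controller successful' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

The one-second summary that follows is again self-consistent: 3014.83 IOPS at 4096 bytes is about 11.78 MiB/s, matching the MiB/s column.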
00:19:41.977 00:19:41.977 Latency(us) 00:19:41.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.977 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:41.977 Verification LBA range: start 0x0 length 0x4000 00:19:41.977 NVMe0n1 : 1.03 3014.83 11.78 0.00 0.00 42024.10 5510.98 48615.80 00:19:41.977 =================================================================================================================== 00:19:41.977 Total : 3014.83 11.78 0.00 0.00 42024.10 5510.98 48615.80 00:19:41.977 14:38:50 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:41.977 14:38:50 -- host/failover.sh@95 -- # grep -q NVMe0 00:19:41.977 14:38:50 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:42.541 14:38:50 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:42.541 14:38:50 -- host/failover.sh@99 -- # grep -q NVMe0 00:19:42.798 14:38:51 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:43.056 14:38:51 -- host/failover.sh@101 -- # sleep 3 00:19:46.354 14:38:54 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:46.354 14:38:54 -- host/failover.sh@103 -- # grep -q NVMe0 00:19:46.354 14:38:54 -- host/failover.sh@108 -- # killprocess 72434 00:19:46.354 14:38:54 -- common/autotest_common.sh@936 -- # '[' -z 72434 ']' 00:19:46.354 14:38:54 -- common/autotest_common.sh@940 -- # kill -0 72434 00:19:46.354 14:38:54 -- common/autotest_common.sh@941 -- # uname 00:19:46.354 14:38:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.354 14:38:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72434 00:19:46.354 killing process with pid 72434 00:19:46.354 14:38:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:46.354 14:38:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:46.354 14:38:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72434' 00:19:46.354 14:38:54 -- common/autotest_common.sh@955 -- # kill 72434 00:19:46.354 14:38:54 -- common/autotest_common.sh@960 -- # wait 72434 00:19:46.610 14:38:55 -- host/failover.sh@110 -- # sync 00:19:46.868 14:38:55 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.128 14:38:55 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:47.128 14:38:55 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:47.128 14:38:55 -- host/failover.sh@116 -- # nvmftestfini 00:19:47.128 14:38:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:47.128 14:38:55 -- nvmf/common.sh@117 -- # sync 00:19:47.128 14:38:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.128 14:38:55 -- nvmf/common.sh@120 -- # set +e 00:19:47.128 14:38:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.128 14:38:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.128 rmmod nvme_tcp 00:19:47.128 rmmod nvme_fabrics 00:19:47.128 rmmod nvme_keyring 00:19:47.128 14:38:55 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:19:47.128 14:38:55 -- nvmf/common.sh@124 -- # set -e 00:19:47.128 14:38:55 -- nvmf/common.sh@125 -- # return 0 00:19:47.128 14:38:55 -- nvmf/common.sh@478 -- # '[' -n 72182 ']' 00:19:47.128 14:38:55 -- nvmf/common.sh@479 -- # killprocess 72182 00:19:47.128 14:38:55 -- common/autotest_common.sh@936 -- # '[' -z 72182 ']' 00:19:47.128 14:38:55 -- common/autotest_common.sh@940 -- # kill -0 72182 00:19:47.128 14:38:55 -- common/autotest_common.sh@941 -- # uname 00:19:47.128 14:38:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.128 14:38:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72182 00:19:47.128 killing process with pid 72182 00:19:47.128 14:38:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:47.128 14:38:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:47.128 14:38:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72182' 00:19:47.128 14:38:55 -- common/autotest_common.sh@955 -- # kill 72182 00:19:47.128 14:38:55 -- common/autotest_common.sh@960 -- # wait 72182 00:19:47.387 14:38:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:47.387 14:38:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:47.387 14:38:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:47.387 14:38:55 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.387 14:38:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.387 14:38:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.387 14:38:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.387 14:38:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.388 14:38:55 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:47.388 00:19:47.388 real 0m33.942s 00:19:47.388 user 2m11.425s 00:19:47.388 sys 0m6.489s 00:19:47.388 14:38:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:47.388 ************************************ 00:19:47.388 14:38:55 -- common/autotest_common.sh@10 -- # set +x 00:19:47.388 END TEST nvmf_failover 00:19:47.388 ************************************ 00:19:47.388 14:38:55 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:47.388 14:38:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:47.388 14:38:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:47.388 14:38:55 -- common/autotest_common.sh@10 -- # set +x 00:19:47.646 ************************************ 00:19:47.646 START TEST nvmf_discovery 00:19:47.646 ************************************ 00:19:47.646 14:38:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:47.646 * Looking for test storage... 
00:19:47.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.646 14:38:56 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.646 14:38:56 -- nvmf/common.sh@7 -- # uname -s 00:19:47.646 14:38:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.647 14:38:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.647 14:38:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.647 14:38:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.647 14:38:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.647 14:38:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.647 14:38:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.647 14:38:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.647 14:38:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.647 14:38:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.647 14:38:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:19:47.647 14:38:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:19:47.647 14:38:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.647 14:38:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.647 14:38:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.647 14:38:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.647 14:38:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.647 14:38:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.647 14:38:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.647 14:38:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.647 14:38:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.647 14:38:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.647 14:38:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.647 14:38:56 -- paths/export.sh@5 -- # export PATH 00:19:47.647 14:38:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.647 14:38:56 -- nvmf/common.sh@47 -- # : 0 00:19:47.647 14:38:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.647 14:38:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.647 14:38:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.647 14:38:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.647 14:38:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.647 14:38:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.647 14:38:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.647 14:38:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.647 14:38:56 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:47.647 14:38:56 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:47.647 14:38:56 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:47.647 14:38:56 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:47.647 14:38:56 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:47.647 14:38:56 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:47.647 14:38:56 -- host/discovery.sh@25 -- # nvmftestinit 00:19:47.647 14:38:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:47.647 14:38:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.647 14:38:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:47.647 14:38:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:47.647 14:38:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:47.647 14:38:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.647 14:38:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.647 14:38:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.647 14:38:56 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:47.647 14:38:56 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:47.647 14:38:56 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:47.647 14:38:56 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:47.647 14:38:56 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:47.647 14:38:56 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:47.647 14:38:56 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.647 14:38:56 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.647 14:38:56 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:47.647 14:38:56 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:47.647 14:38:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.647 14:38:56 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.647 14:38:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.647 14:38:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.647 14:38:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.647 14:38:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.647 14:38:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.647 14:38:56 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.647 14:38:56 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:47.647 14:38:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:47.647 Cannot find device "nvmf_tgt_br" 00:19:47.647 14:38:56 -- nvmf/common.sh@155 -- # true 00:19:47.647 14:38:56 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.647 Cannot find device "nvmf_tgt_br2" 00:19:47.647 14:38:56 -- nvmf/common.sh@156 -- # true 00:19:47.647 14:38:56 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:47.647 14:38:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:47.647 Cannot find device "nvmf_tgt_br" 00:19:47.647 14:38:56 -- nvmf/common.sh@158 -- # true 00:19:47.647 14:38:56 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:47.647 Cannot find device "nvmf_tgt_br2" 00:19:47.647 14:38:56 -- nvmf/common.sh@159 -- # true 00:19:47.647 14:38:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:47.647 14:38:56 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:47.647 14:38:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.647 14:38:56 -- nvmf/common.sh@162 -- # true 00:19:47.647 14:38:56 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.647 14:38:56 -- nvmf/common.sh@163 -- # true 00:19:47.647 14:38:56 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.647 14:38:56 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:48.002 14:38:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:48.002 14:38:56 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:48.002 14:38:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:48.002 14:38:56 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:48.002 14:38:56 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:48.002 14:38:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:48.002 14:38:56 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:48.002 14:38:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:48.002 14:38:56 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:48.002 14:38:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:48.002 14:38:56 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:48.002 14:38:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:48.002 14:38:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:48.002 14:38:56 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:48.002 14:38:56 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:48.002 14:38:56 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:48.002 14:38:56 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:48.002 14:38:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:48.002 14:38:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:48.002 14:38:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:48.002 14:38:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.002 14:38:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:48.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:19:48.002 00:19:48.002 --- 10.0.0.2 ping statistics --- 00:19:48.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.002 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:48.002 14:38:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:48.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:48.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:19:48.002 00:19:48.002 --- 10.0.0.3 ping statistics --- 00:19:48.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.002 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:48.002 14:38:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:48.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:48.002 00:19:48.002 --- 10.0.0.1 ping statistics --- 00:19:48.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.002 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:48.002 14:38:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.002 14:38:56 -- nvmf/common.sh@422 -- # return 0 00:19:48.002 14:38:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:48.002 14:38:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.002 14:38:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:48.002 14:38:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:48.002 14:38:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.002 14:38:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:48.002 14:38:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:48.002 14:38:56 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:48.002 14:38:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:48.002 14:38:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:48.002 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.002 14:38:56 -- nvmf/common.sh@470 -- # nvmfpid=72805 00:19:48.002 14:38:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.002 14:38:56 -- nvmf/common.sh@471 -- # waitforlisten 72805 00:19:48.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
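(Annotation) Everything from nvmf_veth_init down to the three pings above is building the virtual topology the rest of the test runs over. Condensed into its essential commands, all taken from the trace (the second target interface and the cleanup guards are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as verified in the trace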
00:19:48.002 14:38:56 -- common/autotest_common.sh@817 -- # '[' -z 72805 ']' 00:19:48.002 14:38:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.002 14:38:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:48.002 14:38:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.002 14:38:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:48.002 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.002 [2024-04-17 14:38:56.538258] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:48.002 [2024-04-17 14:38:56.538363] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.261 [2024-04-17 14:38:56.671772] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.261 [2024-04-17 14:38:56.756391] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.261 [2024-04-17 14:38:56.757012] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.261 [2024-04-17 14:38:56.757207] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.261 [2024-04-17 14:38:56.757371] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.261 [2024-04-17 14:38:56.757502] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.261 [2024-04-17 14:38:56.757673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.520 14:38:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:48.520 14:38:56 -- common/autotest_common.sh@850 -- # return 0 00:19:48.520 14:38:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:48.520 14:38:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:48.520 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.520 14:38:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.520 14:38:56 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.520 14:38:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.520 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.520 [2024-04-17 14:38:56.898185] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.520 14:38:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.520 14:38:56 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:48.520 14:38:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.520 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.520 [2024-04-17 14:38:56.906360] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:48.520 14:38:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.520 14:38:56 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:48.520 14:38:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.520 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.520 null0 00:19:48.520 14:38:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
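(Annotation) Stripped of the xtrace framing, the target-side bring-up above amounts to a few RPCs. A sketch of the same steps issued with scripts/rpc.py against the default /var/tmp/spdk.sock (the trace drives them through its rpc_cmd wrapper; the second null bdev is created on the very next line of the log):

    # start the target inside the namespace, as shown in the trace
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # create the TCP transport and expose the discovery subsystem on port 8009
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    # two null bdevs back the namespaces added to nqn.2016-06.io.spdk:cnode0 later in the test
    ./scripts/rpc.py bdev_null_create null0 1000 512
    ./scripts/rpc.py bdev_null_create null1 1000 512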
00:19:48.520 14:38:56 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:48.520 14:38:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.520 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.520 null1 00:19:48.520 14:38:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.520 14:38:56 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:48.520 14:38:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.520 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.520 14:38:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.520 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:48.520 14:38:56 -- host/discovery.sh@45 -- # hostpid=72825 00:19:48.520 14:38:56 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:48.520 14:38:56 -- host/discovery.sh@46 -- # waitforlisten 72825 /tmp/host.sock 00:19:48.520 14:38:56 -- common/autotest_common.sh@817 -- # '[' -z 72825 ']' 00:19:48.520 14:38:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:19:48.520 14:38:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:48.520 14:38:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:48.520 14:38:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:48.520 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.520 [2024-04-17 14:38:56.994142] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:48.520 [2024-04-17 14:38:56.994263] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72825 ] 00:19:48.778 [2024-04-17 14:38:57.133909] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.778 [2024-04-17 14:38:57.192235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.712 14:38:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:49.712 14:38:58 -- common/autotest_common.sh@850 -- # return 0 00:19:49.712 14:38:58 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:49.712 14:38:58 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:49.712 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.712 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.712 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.712 14:38:58 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:49.712 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.712 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.712 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.712 14:38:58 -- host/discovery.sh@72 -- # notify_id=0 00:19:49.712 14:38:58 -- host/discovery.sh@83 -- # get_subsystem_names 00:19:49.712 14:38:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:49.712 14:38:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:49.712 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.712 14:38:58 -- 
host/discovery.sh@59 -- # sort 00:19:49.712 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.712 14:38:58 -- host/discovery.sh@59 -- # xargs 00:19:49.712 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.712 14:38:58 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:49.712 14:38:58 -- host/discovery.sh@84 -- # get_bdev_list 00:19:49.712 14:38:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:49.712 14:38:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.712 14:38:58 -- host/discovery.sh@55 -- # sort 00:19:49.712 14:38:58 -- host/discovery.sh@55 -- # xargs 00:19:49.712 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.712 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.712 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.712 14:38:58 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:49.712 14:38:58 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:49.712 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.712 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.712 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.712 14:38:58 -- host/discovery.sh@87 -- # get_subsystem_names 00:19:49.712 14:38:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:49.712 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.712 14:38:58 -- host/discovery.sh@59 -- # sort 00:19:49.712 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.712 14:38:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:49.712 14:38:58 -- host/discovery.sh@59 -- # xargs 00:19:49.712 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.971 14:38:58 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:49.971 14:38:58 -- host/discovery.sh@88 -- # get_bdev_list 00:19:49.971 14:38:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.971 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.971 14:38:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:49.971 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.971 14:38:58 -- host/discovery.sh@55 -- # sort 00:19:49.971 14:38:58 -- host/discovery.sh@55 -- # xargs 00:19:49.971 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.971 14:38:58 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:49.971 14:38:58 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:49.971 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.971 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.971 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.971 14:38:58 -- host/discovery.sh@91 -- # get_subsystem_names 00:19:49.971 14:38:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:49.971 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.971 14:38:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:49.971 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.971 14:38:58 -- host/discovery.sh@59 -- # sort 00:19:49.971 14:38:58 -- host/discovery.sh@59 -- # xargs 00:19:49.971 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.971 14:38:58 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:49.971 14:38:58 -- host/discovery.sh@92 -- # get_bdev_list 00:19:49.971 14:38:58 -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.971 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.971 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.971 14:38:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:49.971 14:38:58 -- host/discovery.sh@55 -- # xargs 00:19:49.971 14:38:58 -- host/discovery.sh@55 -- # sort 00:19:49.971 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.971 14:38:58 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:49.971 14:38:58 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:49.971 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.971 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.971 [2024-04-17 14:38:58.507164] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.971 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.971 14:38:58 -- host/discovery.sh@97 -- # get_subsystem_names 00:19:49.971 14:38:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:49.971 14:38:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:49.971 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.971 14:38:58 -- host/discovery.sh@59 -- # sort 00:19:49.971 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.971 14:38:58 -- host/discovery.sh@59 -- # xargs 00:19:49.971 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.971 14:38:58 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:50.230 14:38:58 -- host/discovery.sh@98 -- # get_bdev_list 00:19:50.230 14:38:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.230 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.230 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:50.230 14:38:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:50.230 14:38:58 -- host/discovery.sh@55 -- # sort 00:19:50.230 14:38:58 -- host/discovery.sh@55 -- # xargs 00:19:50.230 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.230 14:38:58 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:50.230 14:38:58 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:50.230 14:38:58 -- host/discovery.sh@79 -- # expected_count=0 00:19:50.230 14:38:58 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:50.230 14:38:58 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:50.230 14:38:58 -- common/autotest_common.sh@901 -- # local max=10 00:19:50.230 14:38:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:50.230 14:38:58 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:50.230 14:38:58 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:50.230 14:38:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:50.230 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.230 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:50.230 14:38:58 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:50.230 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.230 14:38:58 -- host/discovery.sh@74 -- # notification_count=0 00:19:50.230 14:38:58 -- host/discovery.sh@75 -- # notify_id=0 00:19:50.230 14:38:58 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:50.230 14:38:58 -- common/autotest_common.sh@904 -- # return 0 00:19:50.230 14:38:58 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:50.230 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.230 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:50.230 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.230 14:38:58 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:50.230 14:38:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:50.230 14:38:58 -- common/autotest_common.sh@901 -- # local max=10 00:19:50.230 14:38:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:50.230 14:38:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:50.230 14:38:58 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:50.230 14:38:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:50.230 14:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:50.230 14:38:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:50.230 14:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:50.230 14:38:58 -- host/discovery.sh@59 -- # sort 00:19:50.230 14:38:58 -- host/discovery.sh@59 -- # xargs 00:19:50.230 14:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:50.230 14:38:58 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:19:50.230 14:38:58 -- common/autotest_common.sh@906 -- # sleep 1 00:19:50.796 [2024-04-17 14:38:59.131810] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:50.796 [2024-04-17 14:38:59.131857] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:50.796 [2024-04-17 14:38:59.131878] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:50.796 [2024-04-17 14:38:59.137872] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:50.796 [2024-04-17 14:38:59.194236] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:50.796 [2024-04-17 14:38:59.194285] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:51.363 14:38:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:51.363 14:38:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:51.363 14:38:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:51.363 14:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.363 14:38:59 -- host/discovery.sh@59 -- # sort 00:19:51.363 14:38:59 -- common/autotest_common.sh@10 -- # set +x 00:19:51.363 14:38:59 -- 
host/discovery.sh@59 -- # xargs 00:19:51.363 14:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.363 14:38:59 -- common/autotest_common.sh@904 -- # return 0 00:19:51.363 14:38:59 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:51.363 14:38:59 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:51.363 14:38:59 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.363 14:38:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:51.363 14:38:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.363 14:38:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:51.363 14:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.363 14:38:59 -- common/autotest_common.sh@10 -- # set +x 00:19:51.363 14:38:59 -- host/discovery.sh@55 -- # xargs 00:19:51.363 14:38:59 -- host/discovery.sh@55 -- # sort 00:19:51.363 14:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:51.363 14:38:59 -- common/autotest_common.sh@904 -- # return 0 00:19:51.363 14:38:59 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:51.363 14:38:59 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:51.363 14:38:59 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.363 14:38:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:51.363 14:38:59 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:51.363 14:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.363 14:38:59 -- common/autotest_common.sh@10 -- # set +x 00:19:51.363 14:38:59 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:51.363 14:38:59 -- host/discovery.sh@63 -- # sort -n 00:19:51.363 14:38:59 -- host/discovery.sh@63 -- # xargs 00:19:51.363 14:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:19:51.363 14:38:59 -- common/autotest_common.sh@904 -- # return 0 00:19:51.363 14:38:59 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:51.363 14:38:59 -- host/discovery.sh@79 -- # expected_count=1 00:19:51.363 14:38:59 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:51.363 14:38:59 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:51.363 14:38:59 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.363 14:38:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:51.363 14:38:59 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:51.363 
14:38:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:51.363 14:38:59 -- host/discovery.sh@74 -- # jq '. | length' 00:19:51.363 14:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.363 14:38:59 -- common/autotest_common.sh@10 -- # set +x 00:19:51.363 14:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.622 14:38:59 -- host/discovery.sh@74 -- # notification_count=1 00:19:51.622 14:38:59 -- host/discovery.sh@75 -- # notify_id=1 00:19:51.622 14:38:59 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:51.622 14:38:59 -- common/autotest_common.sh@904 -- # return 0 00:19:51.622 14:38:59 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:51.622 14:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.622 14:38:59 -- common/autotest_common.sh@10 -- # set +x 00:19:51.622 14:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.622 14:38:59 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:51.622 14:38:59 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:51.622 14:38:59 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.622 14:38:59 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.622 14:38:59 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:51.622 14:38:59 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:51.622 14:38:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.622 14:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.622 14:38:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:51.622 14:38:59 -- common/autotest_common.sh@10 -- # set +x 00:19:51.622 14:38:59 -- host/discovery.sh@55 -- # sort 00:19:51.622 14:38:59 -- host/discovery.sh@55 -- # xargs 00:19:51.622 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.622 14:39:00 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:51.622 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:51.622 14:39:00 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:51.622 14:39:00 -- host/discovery.sh@79 -- # expected_count=1 00:19:51.622 14:39:00 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:51.622 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:51.622 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.622 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.622 14:39:00 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:51.622 14:39:00 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:51.622 14:39:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:51.622 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.622 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:51.622 14:39:00 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:51.622 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.622 14:39:00 -- host/discovery.sh@74 -- # notification_count=1 00:19:51.622 14:39:00 -- host/discovery.sh@75 -- # notify_id=2 00:19:51.622 14:39:00 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:51.622 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:51.622 14:39:00 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:51.622 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.622 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:51.622 [2024-04-17 14:39:00.104968] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:51.622 [2024-04-17 14:39:00.106380] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:51.622 [2024-04-17 14:39:00.106622] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:51.622 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.622 14:39:00 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:51.623 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:51.623 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.623 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.623 14:39:00 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:51.623 14:39:00 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:51.623 14:39:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:51.623 [2024-04-17 14:39:00.112372] bdev_nvme.c:6822:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:19:51.623 14:39:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:51.623 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.623 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:51.623 14:39:00 -- host/discovery.sh@59 -- # sort 00:19:51.623 14:39:00 -- host/discovery.sh@59 -- # xargs 00:19:51.623 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.623 [2024-04-17 14:39:00.174777] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:51.623 [2024-04-17 14:39:00.174826] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:51.623 [2024-04-17 14:39:00.174841] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:51.623 14:39:00 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.623 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:51.623 14:39:00 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:51.623 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:51.623 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.623 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.623 14:39:00 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 
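(Annotation) The checks in this stretch all reduce to a handful of helpers from host/discovery.sh, reconstructed here from the trace as a sketch (the real waitforcondition retries its condition up to 10 times, and the notify_id bookkeeping is inferred from the counter values printed above):

    get_subsystem_names()  { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()        { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    get_subsystem_paths()  { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }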
00:19:51.623 14:39:00 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:51.623 14:39:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.623 14:39:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:51.623 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.623 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:51.623 14:39:00 -- host/discovery.sh@55 -- # sort 00:19:51.623 14:39:00 -- host/discovery.sh@55 -- # xargs 00:19:51.623 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:51.882 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:51.882 14:39:00 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:51.882 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:51.882 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.882 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:51.882 14:39:00 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:51.882 14:39:00 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:51.882 14:39:00 -- host/discovery.sh@63 -- # sort -n 00:19:51.882 14:39:00 -- host/discovery.sh@63 -- # xargs 00:19:51.882 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.882 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:51.882 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:51.882 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:51.882 14:39:00 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:51.882 14:39:00 -- host/discovery.sh@79 -- # expected_count=0 00:19:51.882 14:39:00 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:51.882 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:51.882 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.882 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:51.882 14:39:00 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:51.882 14:39:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:51.882 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.882 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:51.882 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.882 14:39:00 -- host/discovery.sh@74 -- # notification_count=0 00:19:51.882 14:39:00 -- host/discovery.sh@75 -- # notify_id=2 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:51.882 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:51.882 14:39:00 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:51.882 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.882 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:51.882 [2024-04-17 14:39:00.358342] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:51.882 [2024-04-17 14:39:00.358654] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:51.882 [2024-04-17 14:39:00.361649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.882 [2024-04-17 14:39:00.361714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.882 [2024-04-17 14:39:00.361738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.882 [2024-04-17 14:39:00.361755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.882 [2024-04-17 14:39:00.361773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.882 [2024-04-17 14:39:00.361788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.882 [2024-04-17 14:39:00.361805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.882 [2024-04-17 14:39:00.361821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.882 [2024-04-17 14:39:00.361836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1090fa0 is same with the state(5) to be set 00:19:51.882 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.882 14:39:00 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:51.882 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:51.882 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.882 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:51.882 [2024-04-17 14:39:00.364339] bdev_nvme.c:6685:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:51.882 [2024-04-17 14:39:00.364391] 
bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:51.882 [2024-04-17 14:39:00.364493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1090fa0 (9): Bad file descriptor 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:51.882 14:39:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:51.882 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.882 14:39:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:51.882 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:51.882 14:39:00 -- host/discovery.sh@59 -- # sort 00:19:51.882 14:39:00 -- host/discovery.sh@59 -- # xargs 00:19:51.882 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.882 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:51.882 14:39:00 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:51.882 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:51.882 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:51.882 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:51.882 14:39:00 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:51.882 14:39:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.882 14:39:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:51.882 14:39:00 -- host/discovery.sh@55 -- # sort 00:19:51.882 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.882 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:51.882 14:39:00 -- host/discovery.sh@55 -- # xargs 00:19:51.882 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.141 14:39:00 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:52.141 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:52.141 14:39:00 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:52.141 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:52.141 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:52.141 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:52.141 14:39:00 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:52.141 14:39:00 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:52.141 14:39:00 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:52.141 14:39:00 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:52.141 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.141 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.141 14:39:00 -- host/discovery.sh@63 -- # xargs 00:19:52.141 14:39:00 -- host/discovery.sh@63 -- # sort -n 00:19:52.141 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.141 14:39:00 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:19:52.141 14:39:00 -- 
common/autotest_common.sh@904 -- # return 0 00:19:52.141 14:39:00 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:52.141 14:39:00 -- host/discovery.sh@79 -- # expected_count=0 00:19:52.141 14:39:00 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:52.141 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:52.141 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:52.141 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:52.142 14:39:00 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:52.142 14:39:00 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:52.142 14:39:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:52.142 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.142 14:39:00 -- host/discovery.sh@74 -- # jq '. | length' 00:19:52.142 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.142 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.142 14:39:00 -- host/discovery.sh@74 -- # notification_count=0 00:19:52.142 14:39:00 -- host/discovery.sh@75 -- # notify_id=2 00:19:52.142 14:39:00 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:52.142 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:52.142 14:39:00 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:52.142 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.142 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.142 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.142 14:39:00 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:52.142 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:52.142 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:52.142 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:52.142 14:39:00 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:52.142 14:39:00 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:52.142 14:39:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:52.142 14:39:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:52.142 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.142 14:39:00 -- host/discovery.sh@59 -- # sort 00:19:52.142 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.142 14:39:00 -- host/discovery.sh@59 -- # xargs 00:19:52.142 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.142 14:39:00 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:19:52.142 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:52.142 14:39:00 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:52.142 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:52.142 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:52.142 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:52.142 14:39:00 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:52.142 14:39:00 -- 
common/autotest_common.sh@903 -- # get_bdev_list 00:19:52.142 14:39:00 -- host/discovery.sh@55 -- # sort 00:19:52.142 14:39:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:52.142 14:39:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:52.142 14:39:00 -- host/discovery.sh@55 -- # xargs 00:19:52.142 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.142 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.142 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.400 14:39:00 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:19:52.400 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:52.400 14:39:00 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:52.400 14:39:00 -- host/discovery.sh@79 -- # expected_count=2 00:19:52.400 14:39:00 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:52.400 14:39:00 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:52.400 14:39:00 -- common/autotest_common.sh@901 -- # local max=10 00:19:52.400 14:39:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:52.400 14:39:00 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:52.400 14:39:00 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:52.400 14:39:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:52.400 14:39:00 -- host/discovery.sh@74 -- # jq '. | length' 00:19:52.400 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.400 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.400 14:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.400 14:39:00 -- host/discovery.sh@74 -- # notification_count=2 00:19:52.400 14:39:00 -- host/discovery.sh@75 -- # notify_id=4 00:19:52.401 14:39:00 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:52.401 14:39:00 -- common/autotest_common.sh@904 -- # return 0 00:19:52.401 14:39:00 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:52.401 14:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.401 14:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:53.334 [2024-04-17 14:39:01.842856] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:53.334 [2024-04-17 14:39:01.843215] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:53.334 [2024-04-17 14:39:01.843270] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:53.334 [2024-04-17 14:39:01.848926] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:19:53.334 [2024-04-17 14:39:01.909432] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:53.334 [2024-04-17 14:39:01.909791] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:53.334 14:39:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.334 14:39:01 -- host/discovery.sh@143 -- # NOT rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:53.334 14:39:01 -- common/autotest_common.sh@638 -- # local es=0 00:19:53.334 14:39:01 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:53.334 14:39:01 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:53.334 14:39:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.334 14:39:01 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:53.334 14:39:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.334 14:39:01 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:53.334 14:39:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.334 14:39:01 -- common/autotest_common.sh@10 -- # set +x 00:19:53.334 request: 00:19:53.334 { 00:19:53.334 "name": "nvme", 00:19:53.334 "trtype": "tcp", 00:19:53.334 "traddr": "10.0.0.2", 00:19:53.334 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:53.334 "adrfam": "ipv4", 00:19:53.334 "trsvcid": "8009", 00:19:53.334 "wait_for_attach": true, 00:19:53.334 "method": "bdev_nvme_start_discovery", 00:19:53.334 "req_id": 1 00:19:53.334 } 00:19:53.334 Got JSON-RPC error response 00:19:53.334 response: 00:19:53.334 { 00:19:53.334 "code": -17, 00:19:53.334 "message": "File exists" 00:19:53.334 } 00:19:53.334 14:39:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:53.334 14:39:01 -- common/autotest_common.sh@641 -- # es=1 00:19:53.334 14:39:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:53.334 14:39:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:53.334 14:39:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:53.334 14:39:01 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:53.334 14:39:01 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:53.334 14:39:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.334 14:39:01 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:53.334 14:39:01 -- common/autotest_common.sh@10 -- # set +x 00:19:53.334 14:39:01 -- host/discovery.sh@67 -- # sort 00:19:53.334 14:39:01 -- host/discovery.sh@67 -- # xargs 00:19:53.592 14:39:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.592 14:39:01 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:53.592 14:39:01 -- host/discovery.sh@146 -- # get_bdev_list 00:19:53.592 14:39:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:53.592 14:39:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.592 14:39:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:53.592 14:39:01 -- common/autotest_common.sh@10 -- # set +x 00:19:53.592 14:39:01 -- host/discovery.sh@55 -- # sort 00:19:53.592 14:39:01 -- host/discovery.sh@55 -- # xargs 00:19:53.592 14:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.592 14:39:02 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:53.592 14:39:02 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:53.592 14:39:02 -- common/autotest_common.sh@638 -- # local es=0 00:19:53.592 14:39:02 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:53.592 14:39:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:53.592 14:39:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.592 14:39:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:53.592 14:39:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.592 14:39:02 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:53.592 14:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.592 14:39:02 -- common/autotest_common.sh@10 -- # set +x 00:19:53.592 request: 00:19:53.592 { 00:19:53.592 "name": "nvme_second", 00:19:53.592 "trtype": "tcp", 00:19:53.592 "traddr": "10.0.0.2", 00:19:53.592 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:53.592 "adrfam": "ipv4", 00:19:53.592 "trsvcid": "8009", 00:19:53.592 "wait_for_attach": true, 00:19:53.592 "method": "bdev_nvme_start_discovery", 00:19:53.592 "req_id": 1 00:19:53.592 } 00:19:53.592 Got JSON-RPC error response 00:19:53.592 response: 00:19:53.592 { 00:19:53.592 "code": -17, 00:19:53.592 "message": "File exists" 00:19:53.592 } 00:19:53.592 14:39:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:53.592 14:39:02 -- common/autotest_common.sh@641 -- # es=1 00:19:53.592 14:39:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:53.592 14:39:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:53.592 14:39:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:53.592 14:39:02 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:53.592 14:39:02 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:53.592 14:39:02 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:53.592 14:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.592 14:39:02 -- host/discovery.sh@67 -- # sort 00:19:53.592 14:39:02 -- common/autotest_common.sh@10 -- # set +x 00:19:53.592 14:39:02 -- host/discovery.sh@67 -- # xargs 00:19:53.592 14:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.592 14:39:02 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:53.592 14:39:02 -- host/discovery.sh@152 -- # get_bdev_list 00:19:53.592 14:39:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:53.592 14:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.592 14:39:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:53.592 14:39:02 -- common/autotest_common.sh@10 -- # set +x 00:19:53.592 14:39:02 -- host/discovery.sh@55 -- # sort 00:19:53.592 14:39:02 -- host/discovery.sh@55 -- # xargs 00:19:53.592 14:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.592 14:39:02 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:53.592 14:39:02 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:53.592 14:39:02 -- common/autotest_common.sh@638 -- # local es=0 00:19:53.592 14:39:02 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:53.592 
14:39:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:53.592 14:39:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.592 14:39:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:53.592 14:39:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.592 14:39:02 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:53.592 14:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.592 14:39:02 -- common/autotest_common.sh@10 -- # set +x 00:19:54.971 [2024-04-17 14:39:03.194479] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.971 [2024-04-17 14:39:03.194694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.971 [2024-04-17 14:39:03.194774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.971 [2024-04-17 14:39:03.194802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1120290 with addr=10.0.0.2, port=8010 00:19:54.971 [2024-04-17 14:39:03.194829] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:54.971 [2024-04-17 14:39:03.194845] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:54.971 [2024-04-17 14:39:03.194860] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:55.905 [2024-04-17 14:39:04.194386] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.905 [2024-04-17 14:39:04.194498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.905 [2024-04-17 14:39:04.194544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.905 [2024-04-17 14:39:04.194561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8d50 with addr=10.0.0.2, port=8010 00:19:55.905 [2024-04-17 14:39:04.194580] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:55.905 [2024-04-17 14:39:04.194590] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:55.905 [2024-04-17 14:39:04.194599] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:56.839 [2024-04-17 14:39:05.194219] bdev_nvme.c:6941:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:56.839 request: 00:19:56.839 { 00:19:56.839 "name": "nvme_second", 00:19:56.839 "trtype": "tcp", 00:19:56.839 "traddr": "10.0.0.2", 00:19:56.839 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:56.839 "adrfam": "ipv4", 00:19:56.839 "trsvcid": "8010", 00:19:56.839 "attach_timeout_ms": 3000, 00:19:56.839 "method": "bdev_nvme_start_discovery", 00:19:56.839 "req_id": 1 00:19:56.839 } 00:19:56.839 Got JSON-RPC error response 00:19:56.839 response: 00:19:56.839 { 00:19:56.839 "code": -110, 00:19:56.839 "message": "Connection timed out" 00:19:56.839 } 00:19:56.839 14:39:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:56.839 14:39:05 -- common/autotest_common.sh@641 -- # es=1 00:19:56.839 14:39:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:56.839 14:39:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:56.839 14:39:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:56.839 14:39:05 -- host/discovery.sh@157 -- # 
get_discovery_ctrlrs 00:19:56.839 14:39:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:56.839 14:39:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:56.839 14:39:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.839 14:39:05 -- host/discovery.sh@67 -- # sort 00:19:56.839 14:39:05 -- common/autotest_common.sh@10 -- # set +x 00:19:56.839 14:39:05 -- host/discovery.sh@67 -- # xargs 00:19:56.839 14:39:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.839 14:39:05 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:56.839 14:39:05 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:56.839 14:39:05 -- host/discovery.sh@161 -- # kill 72825 00:19:56.839 14:39:05 -- host/discovery.sh@162 -- # nvmftestfini 00:19:56.839 14:39:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:56.839 14:39:05 -- nvmf/common.sh@117 -- # sync 00:19:56.839 14:39:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.839 14:39:05 -- nvmf/common.sh@120 -- # set +e 00:19:56.839 14:39:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.839 14:39:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.839 rmmod nvme_tcp 00:19:56.839 rmmod nvme_fabrics 00:19:56.839 rmmod nvme_keyring 00:19:56.839 14:39:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.839 14:39:05 -- nvmf/common.sh@124 -- # set -e 00:19:56.839 14:39:05 -- nvmf/common.sh@125 -- # return 0 00:19:56.839 14:39:05 -- nvmf/common.sh@478 -- # '[' -n 72805 ']' 00:19:56.839 14:39:05 -- nvmf/common.sh@479 -- # killprocess 72805 00:19:56.839 14:39:05 -- common/autotest_common.sh@936 -- # '[' -z 72805 ']' 00:19:56.839 14:39:05 -- common/autotest_common.sh@940 -- # kill -0 72805 00:19:56.839 14:39:05 -- common/autotest_common.sh@941 -- # uname 00:19:56.839 14:39:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:56.840 14:39:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72805 00:19:56.840 killing process with pid 72805 00:19:56.840 14:39:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:56.840 14:39:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:56.840 14:39:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72805' 00:19:56.840 14:39:05 -- common/autotest_common.sh@955 -- # kill 72805 00:19:56.840 14:39:05 -- common/autotest_common.sh@960 -- # wait 72805 00:19:57.098 14:39:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:57.098 14:39:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:57.098 14:39:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:57.098 14:39:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.098 14:39:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.098 14:39:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.098 14:39:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.098 14:39:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.098 14:39:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:57.098 00:19:57.098 real 0m9.577s 00:19:57.098 user 0m19.306s 00:19:57.098 sys 0m1.773s 00:19:57.098 14:39:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:57.098 14:39:05 -- common/autotest_common.sh@10 -- # set +x 00:19:57.098 ************************************ 00:19:57.098 END TEST nvmf_discovery 00:19:57.098 ************************************ 00:19:57.098 14:39:05 -- nvmf/nvmf.sh@100 
-- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:57.098 14:39:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:57.098 14:39:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:57.098 14:39:05 -- common/autotest_common.sh@10 -- # set +x 00:19:57.357 ************************************ 00:19:57.357 START TEST nvmf_discovery_remove_ifc 00:19:57.357 ************************************ 00:19:57.357 14:39:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:57.357 * Looking for test storage... 00:19:57.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:57.357 14:39:05 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.357 14:39:05 -- nvmf/common.sh@7 -- # uname -s 00:19:57.357 14:39:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.357 14:39:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.357 14:39:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.357 14:39:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.357 14:39:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.357 14:39:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.357 14:39:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.357 14:39:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.357 14:39:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.357 14:39:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.357 14:39:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:19:57.357 14:39:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:19:57.357 14:39:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.357 14:39:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.357 14:39:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.357 14:39:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.357 14:39:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.357 14:39:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.357 14:39:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.357 14:39:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.357 14:39:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.357 14:39:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.357 14:39:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.357 14:39:05 -- paths/export.sh@5 -- # export PATH 00:19:57.357 14:39:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.357 14:39:05 -- nvmf/common.sh@47 -- # : 0 00:19:57.357 14:39:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.357 14:39:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.357 14:39:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.358 14:39:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.358 14:39:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.358 14:39:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.358 14:39:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.358 14:39:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.358 14:39:05 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:57.358 14:39:05 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:57.358 14:39:05 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:57.358 14:39:05 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:57.358 14:39:05 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:57.358 14:39:05 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:57.358 14:39:05 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:57.358 14:39:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:57.358 14:39:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.358 14:39:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:57.358 14:39:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:57.358 14:39:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:57.358 14:39:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.358 14:39:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.358 14:39:05 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.358 14:39:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:57.358 14:39:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:57.358 14:39:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:57.358 14:39:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:57.358 14:39:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:57.358 14:39:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:57.358 14:39:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.358 14:39:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.358 14:39:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:57.358 14:39:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:57.358 14:39:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.358 14:39:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.358 14:39:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.358 14:39:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.358 14:39:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.358 14:39:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.358 14:39:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.358 14:39:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.358 14:39:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:57.358 14:39:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:57.358 Cannot find device "nvmf_tgt_br" 00:19:57.358 14:39:05 -- nvmf/common.sh@155 -- # true 00:19:57.358 14:39:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.358 Cannot find device "nvmf_tgt_br2" 00:19:57.358 14:39:05 -- nvmf/common.sh@156 -- # true 00:19:57.358 14:39:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:57.358 14:39:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:57.358 Cannot find device "nvmf_tgt_br" 00:19:57.358 14:39:05 -- nvmf/common.sh@158 -- # true 00:19:57.358 14:39:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:57.358 Cannot find device "nvmf_tgt_br2" 00:19:57.358 14:39:05 -- nvmf/common.sh@159 -- # true 00:19:57.358 14:39:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:57.358 14:39:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:57.358 14:39:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.358 14:39:05 -- nvmf/common.sh@162 -- # true 00:19:57.358 14:39:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.358 14:39:05 -- nvmf/common.sh@163 -- # true 00:19:57.358 14:39:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:57.358 14:39:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:57.358 14:39:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:57.358 14:39:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:57.358 14:39:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:57.358 14:39:05 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:57.617 14:39:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:57.617 14:39:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:57.617 14:39:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:57.617 14:39:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:57.617 14:39:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:57.617 14:39:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:57.617 14:39:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:57.617 14:39:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:57.617 14:39:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:57.617 14:39:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:57.617 14:39:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:57.617 14:39:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:57.617 14:39:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:57.617 14:39:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:57.617 14:39:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:57.617 14:39:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:57.617 14:39:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:57.617 14:39:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:57.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:19:57.617 00:19:57.617 --- 10.0.0.2 ping statistics --- 00:19:57.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.617 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:57.617 14:39:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:57.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:57.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:19:57.617 00:19:57.617 --- 10.0.0.3 ping statistics --- 00:19:57.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.617 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:57.617 14:39:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:57.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:57.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:19:57.617 00:19:57.617 --- 10.0.0.1 ping statistics --- 00:19:57.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.617 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:57.617 14:39:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.617 14:39:06 -- nvmf/common.sh@422 -- # return 0 00:19:57.617 14:39:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:57.617 14:39:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.617 14:39:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:57.617 14:39:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:57.617 14:39:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.617 14:39:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:57.617 14:39:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:57.617 14:39:06 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:57.617 14:39:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:57.617 14:39:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:57.617 14:39:06 -- common/autotest_common.sh@10 -- # set +x 00:19:57.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.617 14:39:06 -- nvmf/common.sh@470 -- # nvmfpid=73279 00:19:57.617 14:39:06 -- nvmf/common.sh@471 -- # waitforlisten 73279 00:19:57.617 14:39:06 -- common/autotest_common.sh@817 -- # '[' -z 73279 ']' 00:19:57.617 14:39:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.617 14:39:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:57.617 14:39:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.617 14:39:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:57.617 14:39:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:57.617 14:39:06 -- common/autotest_common.sh@10 -- # set +x 00:19:57.617 [2024-04-17 14:39:06.176795] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:57.617 [2024-04-17 14:39:06.177779] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.875 [2024-04-17 14:39:06.321477] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.875 [2024-04-17 14:39:06.377967] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.875 [2024-04-17 14:39:06.378026] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.875 [2024-04-17 14:39:06.378037] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.875 [2024-04-17 14:39:06.378046] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.875 [2024-04-17 14:39:06.378053] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
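The nvmf_veth_init steps traced above are easier to follow collected in one place. The lines below are a condensed sketch of that topology, using the interface, namespace, and address names exactly as they appear in the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is created the same way and is omitted here, as are the error and teardown paths handled by nvmf/common.sh.

# Condensed sketch of the virtual topology built by nvmf_veth_init
# (names and addresses taken from the trace above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # bridge the two veth peers together
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator can now reach the target address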
00:19:57.875 [2024-04-17 14:39:06.378084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.811 14:39:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:58.811 14:39:07 -- common/autotest_common.sh@850 -- # return 0 00:19:58.811 14:39:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:58.811 14:39:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:58.811 14:39:07 -- common/autotest_common.sh@10 -- # set +x 00:19:58.811 14:39:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.811 14:39:07 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:58.811 14:39:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.811 14:39:07 -- common/autotest_common.sh@10 -- # set +x 00:19:58.811 [2024-04-17 14:39:07.305167] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.811 [2024-04-17 14:39:07.313297] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:58.811 null0 00:19:58.811 [2024-04-17 14:39:07.345264] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.811 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:58.811 14:39:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.811 14:39:07 -- host/discovery_remove_ifc.sh@59 -- # hostpid=73319 00:19:58.811 14:39:07 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 73319 /tmp/host.sock 00:19:58.811 14:39:07 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:58.811 14:39:07 -- common/autotest_common.sh@817 -- # '[' -z 73319 ']' 00:19:58.811 14:39:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:19:58.811 14:39:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:58.811 14:39:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:58.811 14:39:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:58.811 14:39:07 -- common/autotest_common.sh@10 -- # set +x 00:19:59.069 [2024-04-17 14:39:07.441207] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:19:59.069 [2024-04-17 14:39:07.441354] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73319 ] 00:19:59.069 [2024-04-17 14:39:07.585812] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.069 [2024-04-17 14:39:07.651426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.005 14:39:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:00.005 14:39:08 -- common/autotest_common.sh@850 -- # return 0 00:20:00.005 14:39:08 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.005 14:39:08 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:00.005 14:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.005 14:39:08 -- common/autotest_common.sh@10 -- # set +x 00:20:00.005 14:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.005 14:39:08 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:00.005 14:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.005 14:39:08 -- common/autotest_common.sh@10 -- # set +x 00:20:00.005 14:39:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.005 14:39:08 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:00.005 14:39:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.005 14:39:08 -- common/autotest_common.sh@10 -- # set +x 00:20:00.939 [2024-04-17 14:39:09.429311] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:00.939 [2024-04-17 14:39:09.429353] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:00.939 [2024-04-17 14:39:09.429373] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:00.939 [2024-04-17 14:39:09.435369] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:00.939 [2024-04-17 14:39:09.493009] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:00.939 [2024-04-17 14:39:09.493098] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:00.939 [2024-04-17 14:39:09.493128] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:00.939 [2024-04-17 14:39:09.493148] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:00.939 [2024-04-17 14:39:09.493175] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:00.939 14:39:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.939 14:39:09 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:00.939 14:39:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:00.939 14:39:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:00.939 14:39:09 -- common/autotest_common.sh@549 -- # xtrace_disable 
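For reference, the host side of this test is a second nvmf_tgt process acting as the NVMe-oF initiator, driven over its own RPC socket. The sketch below restates the commands traced above; the rpc_cmd helper in the trace is the test suite's wrapper around scripts/rpc.py, and the backgrounding idiom (& plus $!) is only a stand-in for the script's waitforlisten handling.

# Host (initiator) process as launched in the trace: RPC on /tmp/host.sock,
# --wait-for-rpc defers subsystem init until framework_start_init is called.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!
rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1           # bdev_nvme options as traced
rpc_cmd -s /tmp/host.sock framework_start_init                 # finish SPDK initialization
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach                                          # return only after the discovered subsystem is attached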
00:20:00.939 14:39:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:00.939 14:39:09 -- common/autotest_common.sh@10 -- # set +x 00:20:00.939 14:39:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:00.939 [2024-04-17 14:39:09.498184] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x135f090 was disconnected and freed. delete nvme_qpair. 00:20:00.939 14:39:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:00.939 14:39:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.198 14:39:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:01.198 14:39:09 -- common/autotest_common.sh@10 -- # set +x 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:01.198 14:39:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:01.198 14:39:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:02.133 14:39:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:02.133 14:39:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:02.133 14:39:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.133 14:39:10 -- common/autotest_common.sh@10 -- # set +x 00:20:02.133 14:39:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:02.133 14:39:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:02.133 14:39:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:02.133 14:39:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.133 14:39:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:02.133 14:39:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:03.510 14:39:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:03.510 14:39:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:03.510 14:39:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:03.510 14:39:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:03.510 14:39:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.510 14:39:11 -- common/autotest_common.sh@10 -- # set +x 00:20:03.510 14:39:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:03.510 14:39:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.510 14:39:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:03.510 14:39:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:04.445 14:39:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:04.445 14:39:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.445 14:39:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:04.445 14:39:12 -- common/autotest_common.sh@549 -- # xtrace_disable 
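The repeating bdev_get_bdevs / sleep 1 pattern above and below is the test's wait_for_bdev loop. A minimal sketch of that loop, assuming the same /tmp/host.sock RPC socket and with no timeout handling shown:

# Poll the host's bdev list once per second until it matches the expected value
# (mirrors get_bdev_list/wait_for_bdev as traced in host/discovery_remove_ifc.sh).
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    local bdev_list=$1
    while [[ "$(get_bdev_list)" != "$bdev_list" ]]; do
        sleep 1
    done
}
wait_for_bdev nvme0n1    # later the test also waits for '' and then nvme1n1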
00:20:04.445 14:39:12 -- common/autotest_common.sh@10 -- # set +x 00:20:04.445 14:39:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:04.445 14:39:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:04.445 14:39:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.445 14:39:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:04.445 14:39:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:05.380 14:39:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:05.380 14:39:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:05.380 14:39:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:05.380 14:39:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.380 14:39:13 -- common/autotest_common.sh@10 -- # set +x 00:20:05.380 14:39:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:05.380 14:39:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:05.380 14:39:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.380 14:39:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:05.380 14:39:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:06.314 14:39:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:06.314 14:39:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.314 14:39:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:06.314 14:39:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.314 14:39:14 -- common/autotest_common.sh@10 -- # set +x 00:20:06.314 14:39:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:06.314 14:39:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:06.314 14:39:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.571 14:39:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:06.571 14:39:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:06.571 [2024-04-17 14:39:14.930730] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:06.571 [2024-04-17 14:39:14.930801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.571 [2024-04-17 14:39:14.930818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.571 [2024-04-17 14:39:14.930831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.572 [2024-04-17 14:39:14.930841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.572 [2024-04-17 14:39:14.930851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.572 [2024-04-17 14:39:14.930860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.572 [2024-04-17 14:39:14.930870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.572 [2024-04-17 14:39:14.930880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.572 [2024-04-17 
14:39:14.930890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.572 [2024-04-17 14:39:14.930900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.572 [2024-04-17 14:39:14.930909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cdf70 is same with the state(5) to be set 00:20:06.572 [2024-04-17 14:39:14.940734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cdf70 (9): Bad file descriptor 00:20:06.572 [2024-04-17 14:39:14.950771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.595 14:39:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:07.595 14:39:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:07.595 14:39:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:07.595 14:39:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:07.595 14:39:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.595 14:39:15 -- common/autotest_common.sh@10 -- # set +x 00:20:07.595 14:39:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:07.595 [2024-04-17 14:39:16.015026] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:08.530 [2024-04-17 14:39:17.039029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:09.466 [2024-04-17 14:39:18.063019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:09.466 [2024-04-17 14:39:18.063126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cdf70 with addr=10.0.0.2, port=4420 00:20:09.466 [2024-04-17 14:39:18.063152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cdf70 is same with the state(5) to be set 00:20:09.466 [2024-04-17 14:39:18.063715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cdf70 (9): Bad file descriptor 00:20:09.466 [2024-04-17 14:39:18.063759] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.466 [2024-04-17 14:39:18.063795] bdev_nvme.c:6649:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:20:09.466 [2024-04-17 14:39:18.063848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.466 [2024-04-17 14:39:18.063868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.466 [2024-04-17 14:39:18.063885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.466 [2024-04-17 14:39:18.063898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.466 [2024-04-17 14:39:18.063912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.466 [2024-04-17 14:39:18.063925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.466 [2024-04-17 14:39:18.063939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.466 [2024-04-17 14:39:18.063951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.466 [2024-04-17 14:39:18.064005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.466 [2024-04-17 14:39:18.064021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.466 [2024-04-17 14:39:18.064034] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
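The errno-110 connect failures and aborted admin commands above are expected at this point: just before this burst the test removed the target-side address and downed the interface, and shortly after it restores them so discovery can re-attach. The trigger and recovery commands, as traced:

# Take the target interface away from under the connected host ...
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
# ... the host keeps failing to reconnect until the controller is marked failed and
# the bdev list empties, then the interface is restored for a fresh discovery attach:
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up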
00:20:09.466 [2024-04-17 14:39:18.064194] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cd830 (9): Bad file descriptor 00:20:09.466 [2024-04-17 14:39:18.065218] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:09.466 [2024-04-17 14:39:18.065263] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:09.724 14:39:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.724 14:39:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:09.724 14:39:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.658 14:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.658 14:39:19 -- common/autotest_common.sh@10 -- # set +x 00:20:10.658 14:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.658 14:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.658 14:39:19 -- common/autotest_common.sh@10 -- # set +x 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.658 14:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:10.658 14:39:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:11.592 [2024-04-17 14:39:20.075504] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:11.592 [2024-04-17 14:39:20.075549] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:11.592 [2024-04-17 14:39:20.075570] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:11.592 [2024-04-17 14:39:20.081550] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:20:11.592 [2024-04-17 14:39:20.136839] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:11.592 [2024-04-17 14:39:20.136908] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:11.592 [2024-04-17 14:39:20.136933] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:11.592 [2024-04-17 14:39:20.136976] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:20:11.592 [2024-04-17 14:39:20.136989] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:11.592 [2024-04-17 14:39:20.144306] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x136c5a0 was disconnected and freed. delete nvme_qpair. 00:20:11.851 14:39:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:11.851 14:39:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.851 14:39:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:11.851 14:39:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.851 14:39:20 -- common/autotest_common.sh@10 -- # set +x 00:20:11.851 14:39:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:11.851 14:39:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:11.851 14:39:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.851 14:39:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:11.851 14:39:20 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:11.851 14:39:20 -- host/discovery_remove_ifc.sh@90 -- # killprocess 73319 00:20:11.851 14:39:20 -- common/autotest_common.sh@936 -- # '[' -z 73319 ']' 00:20:11.851 14:39:20 -- common/autotest_common.sh@940 -- # kill -0 73319 00:20:11.851 14:39:20 -- common/autotest_common.sh@941 -- # uname 00:20:11.851 14:39:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:11.851 14:39:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73319 00:20:11.851 14:39:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:11.851 14:39:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:11.851 killing process with pid 73319 00:20:11.851 14:39:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73319' 00:20:11.851 14:39:20 -- common/autotest_common.sh@955 -- # kill 73319 00:20:11.851 14:39:20 -- common/autotest_common.sh@960 -- # wait 73319 00:20:12.110 14:39:20 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:12.110 14:39:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:12.110 14:39:20 -- nvmf/common.sh@117 -- # sync 00:20:12.110 14:39:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.110 14:39:20 -- nvmf/common.sh@120 -- # set +e 00:20:12.110 14:39:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.110 14:39:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.110 rmmod nvme_tcp 00:20:12.110 rmmod nvme_fabrics 00:20:12.110 rmmod nvme_keyring 00:20:12.110 14:39:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.110 14:39:20 -- nvmf/common.sh@124 -- # set -e 00:20:12.110 14:39:20 -- nvmf/common.sh@125 -- # return 0 00:20:12.110 14:39:20 -- nvmf/common.sh@478 -- # '[' -n 73279 ']' 00:20:12.110 14:39:20 -- nvmf/common.sh@479 -- # killprocess 73279 00:20:12.110 14:39:20 -- common/autotest_common.sh@936 -- # '[' -z 73279 ']' 00:20:12.110 14:39:20 -- common/autotest_common.sh@940 -- # kill -0 73279 00:20:12.110 14:39:20 -- common/autotest_common.sh@941 -- # uname 00:20:12.110 14:39:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.110 14:39:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73279 00:20:12.110 killing process with pid 73279 00:20:12.110 14:39:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:12.110 14:39:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
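The nvmftestfini steps above and below wind the environment back down; condensed from the trace (73279 is this run's target pid, and the network namespace itself is removed by remove_spdk_ns):

# Unload the initiator-side kernel modules, stop the SPDK target, and clear the test address.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"     # nvmfpid=73279 in this run
ip -4 addr flush nvmf_init_if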
00:20:12.110 14:39:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73279' 00:20:12.110 14:39:20 -- common/autotest_common.sh@955 -- # kill 73279 00:20:12.110 14:39:20 -- common/autotest_common.sh@960 -- # wait 73279 00:20:12.368 14:39:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:12.368 14:39:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:12.368 14:39:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:12.368 14:39:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.368 14:39:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.368 14:39:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.368 14:39:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.368 14:39:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.368 14:39:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:12.368 ************************************ 00:20:12.368 END TEST nvmf_discovery_remove_ifc 00:20:12.368 ************************************ 00:20:12.368 00:20:12.368 real 0m15.154s 00:20:12.368 user 0m24.341s 00:20:12.368 sys 0m2.533s 00:20:12.368 14:39:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:12.368 14:39:20 -- common/autotest_common.sh@10 -- # set +x 00:20:12.368 14:39:20 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:12.368 14:39:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:12.368 14:39:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.368 14:39:20 -- common/autotest_common.sh@10 -- # set +x 00:20:12.627 ************************************ 00:20:12.627 START TEST nvmf_identify_kernel_target 00:20:12.627 ************************************ 00:20:12.627 14:39:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:12.627 * Looking for test storage... 
00:20:12.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:12.627 14:39:21 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.627 14:39:21 -- nvmf/common.sh@7 -- # uname -s 00:20:12.627 14:39:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.627 14:39:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.627 14:39:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.627 14:39:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.627 14:39:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.627 14:39:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.627 14:39:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.627 14:39:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.627 14:39:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.627 14:39:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.627 14:39:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:20:12.627 14:39:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:20:12.627 14:39:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.627 14:39:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.627 14:39:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:12.627 14:39:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.627 14:39:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.627 14:39:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.627 14:39:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.627 14:39:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.627 14:39:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.627 14:39:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.627 14:39:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.627 14:39:21 -- paths/export.sh@5 -- # export PATH 00:20:12.627 14:39:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.627 14:39:21 -- nvmf/common.sh@47 -- # : 0 00:20:12.627 14:39:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.627 14:39:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.627 14:39:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.627 14:39:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.627 14:39:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.627 14:39:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.627 14:39:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.627 14:39:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.627 14:39:21 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:12.627 14:39:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:12.627 14:39:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.627 14:39:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:12.627 14:39:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:12.627 14:39:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:12.627 14:39:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.627 14:39:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.627 14:39:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.627 14:39:21 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:12.627 14:39:21 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:12.627 14:39:21 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:12.627 14:39:21 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:12.627 14:39:21 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:12.627 14:39:21 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:12.627 14:39:21 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.627 14:39:21 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.627 14:39:21 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:12.627 14:39:21 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:12.627 14:39:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:12.627 14:39:21 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:12.627 14:39:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:12.627 14:39:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:12.627 14:39:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:12.627 14:39:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:12.627 14:39:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:12.627 14:39:21 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:12.627 14:39:21 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:12.627 14:39:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:12.627 Cannot find device "nvmf_tgt_br" 00:20:12.627 14:39:21 -- nvmf/common.sh@155 -- # true 00:20:12.627 14:39:21 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.627 Cannot find device "nvmf_tgt_br2" 00:20:12.627 14:39:21 -- nvmf/common.sh@156 -- # true 00:20:12.627 14:39:21 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:12.627 14:39:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:12.627 Cannot find device "nvmf_tgt_br" 00:20:12.627 14:39:21 -- nvmf/common.sh@158 -- # true 00:20:12.627 14:39:21 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:12.627 Cannot find device "nvmf_tgt_br2" 00:20:12.627 14:39:21 -- nvmf/common.sh@159 -- # true 00:20:12.627 14:39:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:12.627 14:39:21 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:12.628 14:39:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.628 14:39:21 -- nvmf/common.sh@162 -- # true 00:20:12.628 14:39:21 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.628 14:39:21 -- nvmf/common.sh@163 -- # true 00:20:12.628 14:39:21 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:12.628 14:39:21 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:12.628 14:39:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:12.628 14:39:21 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:12.628 14:39:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:12.886 14:39:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:12.886 14:39:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:12.886 14:39:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:12.886 14:39:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:12.886 14:39:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:12.886 14:39:21 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:12.886 14:39:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:12.886 14:39:21 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:12.886 14:39:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:12.886 14:39:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:12.886 14:39:21 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:12.886 14:39:21 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:12.886 14:39:21 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:12.886 14:39:21 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:12.886 14:39:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:12.886 14:39:21 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:12.886 14:39:21 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:12.886 14:39:21 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:12.886 14:39:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:12.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:20:12.886 00:20:12.886 --- 10.0.0.2 ping statistics --- 00:20:12.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.886 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:12.886 14:39:21 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:12.886 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:12.886 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:20:12.886 00:20:12.886 --- 10.0.0.3 ping statistics --- 00:20:12.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.886 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:12.886 14:39:21 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:12.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:12.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:20:12.886 00:20:12.886 --- 10.0.0.1 ping statistics --- 00:20:12.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.886 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:12.886 14:39:21 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.886 14:39:21 -- nvmf/common.sh@422 -- # return 0 00:20:12.886 14:39:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:12.886 14:39:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.886 14:39:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:12.886 14:39:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:12.886 14:39:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.886 14:39:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:12.886 14:39:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:12.886 14:39:21 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:12.886 14:39:21 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:12.886 14:39:21 -- nvmf/common.sh@717 -- # local ip 00:20:12.886 14:39:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:12.886 14:39:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:12.886 14:39:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.886 14:39:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.886 14:39:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:12.886 14:39:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.886 14:39:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:12.886 14:39:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:12.886 14:39:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:12.886 14:39:21 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:12.886 14:39:21 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:12.886 14:39:21 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:12.886 14:39:21 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:12.886 14:39:21 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:12.886 14:39:21 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:12.887 14:39:21 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:12.887 14:39:21 -- nvmf/common.sh@628 -- # local block nvme 00:20:12.887 14:39:21 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:20:12.887 14:39:21 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:12.887 14:39:21 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:12.887 14:39:21 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:13.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:13.145 Waiting for block devices as requested 00:20:13.404 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:13.404 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:13.404 14:39:21 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:13.404 14:39:21 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:13.404 14:39:21 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:13.404 14:39:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:13.404 14:39:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:13.404 14:39:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:13.404 14:39:21 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:13.404 14:39:21 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:13.404 14:39:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:13.663 No valid GPT data, bailing 00:20:13.663 14:39:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:13.663 14:39:22 -- scripts/common.sh@391 -- # pt= 00:20:13.663 14:39:22 -- scripts/common.sh@392 -- # return 1 00:20:13.663 14:39:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:13.663 14:39:22 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:13.663 14:39:22 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:13.663 14:39:22 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:20:13.663 14:39:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:13.663 14:39:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:13.663 14:39:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:13.663 14:39:22 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:20:13.663 14:39:22 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:13.663 14:39:22 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:13.663 No valid GPT data, bailing 00:20:13.663 14:39:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:13.663 14:39:22 -- scripts/common.sh@391 -- # pt= 00:20:13.663 14:39:22 -- scripts/common.sh@392 -- # return 1 00:20:13.663 14:39:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:20:13.663 14:39:22 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:13.663 14:39:22 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:13.663 14:39:22 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:20:13.663 14:39:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:13.663 14:39:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:13.663 14:39:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:13.663 14:39:22 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:20:13.663 14:39:22 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:13.663 14:39:22 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:13.663 No valid GPT data, bailing 00:20:13.663 14:39:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:13.663 14:39:22 -- scripts/common.sh@391 -- # pt= 00:20:13.663 14:39:22 -- scripts/common.sh@392 -- # return 1 00:20:13.663 14:39:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:20:13.663 14:39:22 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:13.663 14:39:22 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:13.663 14:39:22 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:20:13.663 14:39:22 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:13.663 14:39:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:13.663 14:39:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:13.663 14:39:22 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:20:13.663 14:39:22 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:13.663 14:39:22 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:13.663 No valid GPT data, bailing 00:20:13.663 14:39:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:13.663 14:39:22 -- scripts/common.sh@391 -- # pt= 00:20:13.663 14:39:22 -- scripts/common.sh@392 -- # return 1 00:20:13.663 14:39:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:20:13.663 14:39:22 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:20:13.663 14:39:22 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:13.923 14:39:22 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:13.923 14:39:22 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:13.923 14:39:22 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:13.923 14:39:22 -- nvmf/common.sh@656 -- # echo 1 00:20:13.923 14:39:22 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:20:13.923 14:39:22 -- nvmf/common.sh@658 -- # echo 1 00:20:13.923 14:39:22 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:20:13.923 14:39:22 -- nvmf/common.sh@661 -- # echo tcp 00:20:13.923 14:39:22 -- nvmf/common.sh@662 -- # echo 4420 00:20:13.923 14:39:22 -- nvmf/common.sh@663 -- # echo ipv4 00:20:13.923 14:39:22 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:13.923 14:39:22 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 --hostid=c475d660-18c3-4238-bb35-f293e0cc1403 -a 10.0.0.1 -t tcp -s 4420 00:20:13.923 00:20:13.923 Discovery Log Number of Records 2, Generation counter 2 00:20:13.923 =====Discovery Log Entry 0====== 00:20:13.923 trtype: tcp 00:20:13.923 adrfam: ipv4 00:20:13.923 subtype: current discovery subsystem 00:20:13.923 treq: not specified, sq flow control disable supported 00:20:13.923 portid: 1 00:20:13.923 trsvcid: 4420 00:20:13.923 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:13.923 traddr: 10.0.0.1 00:20:13.923 eflags: none 00:20:13.923 sectype: none 00:20:13.923 =====Discovery Log Entry 1====== 00:20:13.923 trtype: tcp 00:20:13.923 adrfam: ipv4 00:20:13.923 subtype: nvme subsystem 00:20:13.923 treq: not specified, sq flow control disable supported 00:20:13.923 portid: 1 00:20:13.923 trsvcid: 4420 00:20:13.923 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:13.923 traddr: 10.0.0.1 00:20:13.923 eflags: none 00:20:13.923 sectype: none 00:20:13.923 14:39:22 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:13.923 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:13.923 ===================================================== 00:20:13.923 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:13.923 ===================================================== 00:20:13.923 Controller Capabilities/Features 00:20:13.923 ================================ 00:20:13.923 Vendor ID: 0000 00:20:13.923 Subsystem Vendor ID: 0000 00:20:13.923 Serial Number: a65dd430a5fb3875b4a6 00:20:13.923 Model Number: Linux 00:20:13.923 Firmware Version: 6.7.0-68 00:20:13.923 Recommended Arb Burst: 0 00:20:13.923 IEEE OUI Identifier: 00 00 00 00:20:13.923 Multi-path I/O 00:20:13.923 May have multiple subsystem ports: No 00:20:13.923 May have multiple controllers: No 00:20:13.923 Associated with SR-IOV VF: No 00:20:13.923 Max Data Transfer Size: Unlimited 00:20:13.923 Max Number of Namespaces: 0 00:20:13.923 Max Number of I/O Queues: 1024 00:20:13.923 NVMe Specification Version (VS): 1.3 00:20:13.923 NVMe Specification Version (Identify): 1.3 00:20:13.923 Maximum Queue Entries: 1024 00:20:13.923 Contiguous Queues Required: No 00:20:13.923 Arbitration Mechanisms Supported 00:20:13.923 Weighted Round Robin: Not Supported 00:20:13.923 Vendor Specific: Not Supported 00:20:13.923 Reset Timeout: 7500 ms 00:20:13.923 Doorbell Stride: 4 bytes 00:20:13.923 NVM Subsystem Reset: Not Supported 00:20:13.923 Command Sets Supported 00:20:13.923 NVM Command Set: Supported 00:20:13.923 Boot Partition: Not Supported 00:20:13.923 Memory Page Size Minimum: 4096 bytes 00:20:13.923 Memory Page Size Maximum: 4096 bytes 00:20:13.923 Persistent Memory Region: Not Supported 00:20:13.923 Optional Asynchronous Events Supported 00:20:13.923 Namespace Attribute Notices: Not Supported 00:20:13.923 Firmware Activation Notices: Not Supported 00:20:13.923 ANA Change Notices: Not Supported 00:20:13.923 PLE Aggregate Log Change Notices: Not Supported 00:20:13.923 LBA Status Info Alert Notices: Not Supported 00:20:13.923 EGE Aggregate Log Change Notices: Not Supported 00:20:13.923 Normal NVM Subsystem Shutdown event: Not Supported 00:20:13.923 Zone Descriptor Change Notices: Not Supported 00:20:13.923 Discovery Log Change Notices: Supported 00:20:13.923 Controller Attributes 00:20:13.923 128-bit Host Identifier: Not Supported 00:20:13.923 Non-Operational Permissive Mode: Not Supported 00:20:13.923 NVM Sets: Not Supported 00:20:13.923 Read Recovery Levels: Not Supported 00:20:13.923 Endurance Groups: Not Supported 00:20:13.923 Predictable Latency Mode: Not Supported 00:20:13.923 Traffic Based Keep ALive: Not Supported 00:20:13.923 Namespace Granularity: Not Supported 00:20:13.923 SQ Associations: Not Supported 00:20:13.923 UUID List: Not Supported 00:20:13.923 Multi-Domain Subsystem: Not Supported 00:20:13.923 Fixed Capacity Management: Not Supported 
00:20:13.923 Variable Capacity Management: Not Supported 00:20:13.923 Delete Endurance Group: Not Supported 00:20:13.923 Delete NVM Set: Not Supported 00:20:13.923 Extended LBA Formats Supported: Not Supported 00:20:13.923 Flexible Data Placement Supported: Not Supported 00:20:13.923 00:20:13.923 Controller Memory Buffer Support 00:20:13.923 ================================ 00:20:13.923 Supported: No 00:20:13.923 00:20:13.923 Persistent Memory Region Support 00:20:13.923 ================================ 00:20:13.923 Supported: No 00:20:13.923 00:20:13.923 Admin Command Set Attributes 00:20:13.923 ============================ 00:20:13.923 Security Send/Receive: Not Supported 00:20:13.923 Format NVM: Not Supported 00:20:13.923 Firmware Activate/Download: Not Supported 00:20:13.923 Namespace Management: Not Supported 00:20:13.923 Device Self-Test: Not Supported 00:20:13.923 Directives: Not Supported 00:20:13.923 NVMe-MI: Not Supported 00:20:13.923 Virtualization Management: Not Supported 00:20:13.923 Doorbell Buffer Config: Not Supported 00:20:13.923 Get LBA Status Capability: Not Supported 00:20:13.923 Command & Feature Lockdown Capability: Not Supported 00:20:13.923 Abort Command Limit: 1 00:20:13.923 Async Event Request Limit: 1 00:20:13.923 Number of Firmware Slots: N/A 00:20:13.923 Firmware Slot 1 Read-Only: N/A 00:20:13.923 Firmware Activation Without Reset: N/A 00:20:13.923 Multiple Update Detection Support: N/A 00:20:13.923 Firmware Update Granularity: No Information Provided 00:20:13.923 Per-Namespace SMART Log: No 00:20:13.923 Asymmetric Namespace Access Log Page: Not Supported 00:20:13.923 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:13.923 Command Effects Log Page: Not Supported 00:20:13.923 Get Log Page Extended Data: Supported 00:20:13.923 Telemetry Log Pages: Not Supported 00:20:13.923 Persistent Event Log Pages: Not Supported 00:20:13.923 Supported Log Pages Log Page: May Support 00:20:13.923 Commands Supported & Effects Log Page: Not Supported 00:20:13.923 Feature Identifiers & Effects Log Page:May Support 00:20:13.923 NVMe-MI Commands & Effects Log Page: May Support 00:20:13.923 Data Area 4 for Telemetry Log: Not Supported 00:20:13.923 Error Log Page Entries Supported: 1 00:20:13.923 Keep Alive: Not Supported 00:20:13.923 00:20:13.923 NVM Command Set Attributes 00:20:13.923 ========================== 00:20:13.923 Submission Queue Entry Size 00:20:13.923 Max: 1 00:20:13.923 Min: 1 00:20:13.923 Completion Queue Entry Size 00:20:13.923 Max: 1 00:20:13.923 Min: 1 00:20:13.923 Number of Namespaces: 0 00:20:13.923 Compare Command: Not Supported 00:20:13.923 Write Uncorrectable Command: Not Supported 00:20:13.923 Dataset Management Command: Not Supported 00:20:13.923 Write Zeroes Command: Not Supported 00:20:13.923 Set Features Save Field: Not Supported 00:20:13.923 Reservations: Not Supported 00:20:13.923 Timestamp: Not Supported 00:20:13.923 Copy: Not Supported 00:20:13.923 Volatile Write Cache: Not Present 00:20:13.923 Atomic Write Unit (Normal): 1 00:20:13.923 Atomic Write Unit (PFail): 1 00:20:13.923 Atomic Compare & Write Unit: 1 00:20:13.923 Fused Compare & Write: Not Supported 00:20:13.923 Scatter-Gather List 00:20:13.923 SGL Command Set: Supported 00:20:13.923 SGL Keyed: Not Supported 00:20:13.924 SGL Bit Bucket Descriptor: Not Supported 00:20:13.924 SGL Metadata Pointer: Not Supported 00:20:13.924 Oversized SGL: Not Supported 00:20:13.924 SGL Metadata Address: Not Supported 00:20:13.924 SGL Offset: Supported 00:20:13.924 Transport SGL Data Block: Not 
Supported 00:20:13.924 Replay Protected Memory Block: Not Supported 00:20:13.924 00:20:13.924 Firmware Slot Information 00:20:13.924 ========================= 00:20:13.924 Active slot: 0 00:20:13.924 00:20:13.924 00:20:13.924 Error Log 00:20:13.924 ========= 00:20:13.924 00:20:13.924 Active Namespaces 00:20:13.924 ================= 00:20:13.924 Discovery Log Page 00:20:13.924 ================== 00:20:13.924 Generation Counter: 2 00:20:13.924 Number of Records: 2 00:20:13.924 Record Format: 0 00:20:13.924 00:20:13.924 Discovery Log Entry 0 00:20:13.924 ---------------------- 00:20:13.924 Transport Type: 3 (TCP) 00:20:13.924 Address Family: 1 (IPv4) 00:20:13.924 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:13.924 Entry Flags: 00:20:13.924 Duplicate Returned Information: 0 00:20:13.924 Explicit Persistent Connection Support for Discovery: 0 00:20:13.924 Transport Requirements: 00:20:13.924 Secure Channel: Not Specified 00:20:13.924 Port ID: 1 (0x0001) 00:20:13.924 Controller ID: 65535 (0xffff) 00:20:13.924 Admin Max SQ Size: 32 00:20:13.924 Transport Service Identifier: 4420 00:20:13.924 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:13.924 Transport Address: 10.0.0.1 00:20:13.924 Discovery Log Entry 1 00:20:13.924 ---------------------- 00:20:13.924 Transport Type: 3 (TCP) 00:20:13.924 Address Family: 1 (IPv4) 00:20:13.924 Subsystem Type: 2 (NVM Subsystem) 00:20:13.924 Entry Flags: 00:20:13.924 Duplicate Returned Information: 0 00:20:13.924 Explicit Persistent Connection Support for Discovery: 0 00:20:13.924 Transport Requirements: 00:20:13.924 Secure Channel: Not Specified 00:20:13.924 Port ID: 1 (0x0001) 00:20:13.924 Controller ID: 65535 (0xffff) 00:20:13.924 Admin Max SQ Size: 32 00:20:13.924 Transport Service Identifier: 4420 00:20:13.924 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:13.924 Transport Address: 10.0.0.1 00:20:13.924 14:39:22 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:14.183 get_feature(0x01) failed 00:20:14.183 get_feature(0x02) failed 00:20:14.183 get_feature(0x04) failed 00:20:14.183 ===================================================== 00:20:14.183 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:14.183 ===================================================== 00:20:14.183 Controller Capabilities/Features 00:20:14.183 ================================ 00:20:14.183 Vendor ID: 0000 00:20:14.183 Subsystem Vendor ID: 0000 00:20:14.183 Serial Number: 352fb3eb3f072e3f04c8 00:20:14.183 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:14.183 Firmware Version: 6.7.0-68 00:20:14.183 Recommended Arb Burst: 6 00:20:14.183 IEEE OUI Identifier: 00 00 00 00:20:14.183 Multi-path I/O 00:20:14.183 May have multiple subsystem ports: Yes 00:20:14.183 May have multiple controllers: Yes 00:20:14.183 Associated with SR-IOV VF: No 00:20:14.183 Max Data Transfer Size: Unlimited 00:20:14.183 Max Number of Namespaces: 1024 00:20:14.183 Max Number of I/O Queues: 128 00:20:14.183 NVMe Specification Version (VS): 1.3 00:20:14.183 NVMe Specification Version (Identify): 1.3 00:20:14.183 Maximum Queue Entries: 1024 00:20:14.183 Contiguous Queues Required: No 00:20:14.183 Arbitration Mechanisms Supported 00:20:14.183 Weighted Round Robin: Not Supported 00:20:14.183 Vendor Specific: Not Supported 00:20:14.183 Reset Timeout: 7500 ms 00:20:14.183 Doorbell Stride: 4 bytes 
00:20:14.183 NVM Subsystem Reset: Not Supported 00:20:14.183 Command Sets Supported 00:20:14.183 NVM Command Set: Supported 00:20:14.183 Boot Partition: Not Supported 00:20:14.183 Memory Page Size Minimum: 4096 bytes 00:20:14.183 Memory Page Size Maximum: 4096 bytes 00:20:14.183 Persistent Memory Region: Not Supported 00:20:14.183 Optional Asynchronous Events Supported 00:20:14.183 Namespace Attribute Notices: Supported 00:20:14.183 Firmware Activation Notices: Not Supported 00:20:14.183 ANA Change Notices: Supported 00:20:14.183 PLE Aggregate Log Change Notices: Not Supported 00:20:14.183 LBA Status Info Alert Notices: Not Supported 00:20:14.183 EGE Aggregate Log Change Notices: Not Supported 00:20:14.183 Normal NVM Subsystem Shutdown event: Not Supported 00:20:14.183 Zone Descriptor Change Notices: Not Supported 00:20:14.183 Discovery Log Change Notices: Not Supported 00:20:14.183 Controller Attributes 00:20:14.183 128-bit Host Identifier: Supported 00:20:14.183 Non-Operational Permissive Mode: Not Supported 00:20:14.183 NVM Sets: Not Supported 00:20:14.183 Read Recovery Levels: Not Supported 00:20:14.183 Endurance Groups: Not Supported 00:20:14.183 Predictable Latency Mode: Not Supported 00:20:14.183 Traffic Based Keep ALive: Supported 00:20:14.183 Namespace Granularity: Not Supported 00:20:14.183 SQ Associations: Not Supported 00:20:14.183 UUID List: Not Supported 00:20:14.183 Multi-Domain Subsystem: Not Supported 00:20:14.183 Fixed Capacity Management: Not Supported 00:20:14.183 Variable Capacity Management: Not Supported 00:20:14.183 Delete Endurance Group: Not Supported 00:20:14.183 Delete NVM Set: Not Supported 00:20:14.183 Extended LBA Formats Supported: Not Supported 00:20:14.183 Flexible Data Placement Supported: Not Supported 00:20:14.183 00:20:14.183 Controller Memory Buffer Support 00:20:14.183 ================================ 00:20:14.183 Supported: No 00:20:14.183 00:20:14.183 Persistent Memory Region Support 00:20:14.183 ================================ 00:20:14.183 Supported: No 00:20:14.183 00:20:14.183 Admin Command Set Attributes 00:20:14.183 ============================ 00:20:14.183 Security Send/Receive: Not Supported 00:20:14.183 Format NVM: Not Supported 00:20:14.183 Firmware Activate/Download: Not Supported 00:20:14.183 Namespace Management: Not Supported 00:20:14.183 Device Self-Test: Not Supported 00:20:14.183 Directives: Not Supported 00:20:14.183 NVMe-MI: Not Supported 00:20:14.183 Virtualization Management: Not Supported 00:20:14.183 Doorbell Buffer Config: Not Supported 00:20:14.183 Get LBA Status Capability: Not Supported 00:20:14.183 Command & Feature Lockdown Capability: Not Supported 00:20:14.183 Abort Command Limit: 4 00:20:14.183 Async Event Request Limit: 4 00:20:14.183 Number of Firmware Slots: N/A 00:20:14.183 Firmware Slot 1 Read-Only: N/A 00:20:14.183 Firmware Activation Without Reset: N/A 00:20:14.183 Multiple Update Detection Support: N/A 00:20:14.183 Firmware Update Granularity: No Information Provided 00:20:14.183 Per-Namespace SMART Log: Yes 00:20:14.183 Asymmetric Namespace Access Log Page: Supported 00:20:14.183 ANA Transition Time : 10 sec 00:20:14.183 00:20:14.183 Asymmetric Namespace Access Capabilities 00:20:14.183 ANA Optimized State : Supported 00:20:14.183 ANA Non-Optimized State : Supported 00:20:14.183 ANA Inaccessible State : Supported 00:20:14.183 ANA Persistent Loss State : Supported 00:20:14.183 ANA Change State : Supported 00:20:14.183 ANAGRPID is not changed : No 00:20:14.183 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:20:14.183 00:20:14.183 ANA Group Identifier Maximum : 128 00:20:14.183 Number of ANA Group Identifiers : 128 00:20:14.183 Max Number of Allowed Namespaces : 1024 00:20:14.183 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:14.183 Command Effects Log Page: Supported 00:20:14.183 Get Log Page Extended Data: Supported 00:20:14.183 Telemetry Log Pages: Not Supported 00:20:14.183 Persistent Event Log Pages: Not Supported 00:20:14.183 Supported Log Pages Log Page: May Support 00:20:14.183 Commands Supported & Effects Log Page: Not Supported 00:20:14.183 Feature Identifiers & Effects Log Page:May Support 00:20:14.183 NVMe-MI Commands & Effects Log Page: May Support 00:20:14.183 Data Area 4 for Telemetry Log: Not Supported 00:20:14.183 Error Log Page Entries Supported: 128 00:20:14.183 Keep Alive: Supported 00:20:14.183 Keep Alive Granularity: 1000 ms 00:20:14.183 00:20:14.183 NVM Command Set Attributes 00:20:14.183 ========================== 00:20:14.183 Submission Queue Entry Size 00:20:14.183 Max: 64 00:20:14.183 Min: 64 00:20:14.183 Completion Queue Entry Size 00:20:14.183 Max: 16 00:20:14.183 Min: 16 00:20:14.183 Number of Namespaces: 1024 00:20:14.183 Compare Command: Not Supported 00:20:14.183 Write Uncorrectable Command: Not Supported 00:20:14.183 Dataset Management Command: Supported 00:20:14.183 Write Zeroes Command: Supported 00:20:14.183 Set Features Save Field: Not Supported 00:20:14.183 Reservations: Not Supported 00:20:14.183 Timestamp: Not Supported 00:20:14.183 Copy: Not Supported 00:20:14.183 Volatile Write Cache: Present 00:20:14.183 Atomic Write Unit (Normal): 1 00:20:14.183 Atomic Write Unit (PFail): 1 00:20:14.183 Atomic Compare & Write Unit: 1 00:20:14.183 Fused Compare & Write: Not Supported 00:20:14.183 Scatter-Gather List 00:20:14.183 SGL Command Set: Supported 00:20:14.183 SGL Keyed: Not Supported 00:20:14.183 SGL Bit Bucket Descriptor: Not Supported 00:20:14.183 SGL Metadata Pointer: Not Supported 00:20:14.183 Oversized SGL: Not Supported 00:20:14.183 SGL Metadata Address: Not Supported 00:20:14.183 SGL Offset: Supported 00:20:14.183 Transport SGL Data Block: Not Supported 00:20:14.183 Replay Protected Memory Block: Not Supported 00:20:14.183 00:20:14.183 Firmware Slot Information 00:20:14.183 ========================= 00:20:14.184 Active slot: 0 00:20:14.184 00:20:14.184 Asymmetric Namespace Access 00:20:14.184 =========================== 00:20:14.184 Change Count : 0 00:20:14.184 Number of ANA Group Descriptors : 1 00:20:14.184 ANA Group Descriptor : 0 00:20:14.184 ANA Group ID : 1 00:20:14.184 Number of NSID Values : 1 00:20:14.184 Change Count : 0 00:20:14.184 ANA State : 1 00:20:14.184 Namespace Identifier : 1 00:20:14.184 00:20:14.184 Commands Supported and Effects 00:20:14.184 ============================== 00:20:14.184 Admin Commands 00:20:14.184 -------------- 00:20:14.184 Get Log Page (02h): Supported 00:20:14.184 Identify (06h): Supported 00:20:14.184 Abort (08h): Supported 00:20:14.184 Set Features (09h): Supported 00:20:14.184 Get Features (0Ah): Supported 00:20:14.184 Asynchronous Event Request (0Ch): Supported 00:20:14.184 Keep Alive (18h): Supported 00:20:14.184 I/O Commands 00:20:14.184 ------------ 00:20:14.184 Flush (00h): Supported 00:20:14.184 Write (01h): Supported LBA-Change 00:20:14.184 Read (02h): Supported 00:20:14.184 Write Zeroes (08h): Supported LBA-Change 00:20:14.184 Dataset Management (09h): Supported 00:20:14.184 00:20:14.184 Error Log 00:20:14.184 ========= 00:20:14.184 Entry: 0 00:20:14.184 Error Count: 0x3 00:20:14.184 Submission 
Queue Id: 0x0 00:20:14.184 Command Id: 0x5 00:20:14.184 Phase Bit: 0 00:20:14.184 Status Code: 0x2 00:20:14.184 Status Code Type: 0x0 00:20:14.184 Do Not Retry: 1 00:20:14.184 Error Location: 0x28 00:20:14.184 LBA: 0x0 00:20:14.184 Namespace: 0x0 00:20:14.184 Vendor Log Page: 0x0 00:20:14.184 ----------- 00:20:14.184 Entry: 1 00:20:14.184 Error Count: 0x2 00:20:14.184 Submission Queue Id: 0x0 00:20:14.184 Command Id: 0x5 00:20:14.184 Phase Bit: 0 00:20:14.184 Status Code: 0x2 00:20:14.184 Status Code Type: 0x0 00:20:14.184 Do Not Retry: 1 00:20:14.184 Error Location: 0x28 00:20:14.184 LBA: 0x0 00:20:14.184 Namespace: 0x0 00:20:14.184 Vendor Log Page: 0x0 00:20:14.184 ----------- 00:20:14.184 Entry: 2 00:20:14.184 Error Count: 0x1 00:20:14.184 Submission Queue Id: 0x0 00:20:14.184 Command Id: 0x4 00:20:14.184 Phase Bit: 0 00:20:14.184 Status Code: 0x2 00:20:14.184 Status Code Type: 0x0 00:20:14.184 Do Not Retry: 1 00:20:14.184 Error Location: 0x28 00:20:14.184 LBA: 0x0 00:20:14.184 Namespace: 0x0 00:20:14.184 Vendor Log Page: 0x0 00:20:14.184 00:20:14.184 Number of Queues 00:20:14.184 ================ 00:20:14.184 Number of I/O Submission Queues: 128 00:20:14.184 Number of I/O Completion Queues: 128 00:20:14.184 00:20:14.184 ZNS Specific Controller Data 00:20:14.184 ============================ 00:20:14.184 Zone Append Size Limit: 0 00:20:14.184 00:20:14.184 00:20:14.184 Active Namespaces 00:20:14.184 ================= 00:20:14.184 get_feature(0x05) failed 00:20:14.184 Namespace ID:1 00:20:14.184 Command Set Identifier: NVM (00h) 00:20:14.184 Deallocate: Supported 00:20:14.184 Deallocated/Unwritten Error: Not Supported 00:20:14.184 Deallocated Read Value: Unknown 00:20:14.184 Deallocate in Write Zeroes: Not Supported 00:20:14.184 Deallocated Guard Field: 0xFFFF 00:20:14.184 Flush: Supported 00:20:14.184 Reservation: Not Supported 00:20:14.184 Namespace Sharing Capabilities: Multiple Controllers 00:20:14.184 Size (in LBAs): 1310720 (5GiB) 00:20:14.184 Capacity (in LBAs): 1310720 (5GiB) 00:20:14.184 Utilization (in LBAs): 1310720 (5GiB) 00:20:14.184 UUID: f1ed669a-ff15-4ec9-b9af-e0b4945a0990 00:20:14.184 Thin Provisioning: Not Supported 00:20:14.184 Per-NS Atomic Units: Yes 00:20:14.184 Atomic Boundary Size (Normal): 0 00:20:14.184 Atomic Boundary Size (PFail): 0 00:20:14.184 Atomic Boundary Offset: 0 00:20:14.184 NGUID/EUI64 Never Reused: No 00:20:14.184 ANA group ID: 1 00:20:14.184 Namespace Write Protected: No 00:20:14.184 Number of LBA Formats: 1 00:20:14.184 Current LBA Format: LBA Format #00 00:20:14.184 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:14.184 00:20:14.184 14:39:22 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:14.184 14:39:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:14.184 14:39:22 -- nvmf/common.sh@117 -- # sync 00:20:14.184 14:39:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:14.184 14:39:22 -- nvmf/common.sh@120 -- # set +e 00:20:14.184 14:39:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:14.184 14:39:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:14.184 rmmod nvme_tcp 00:20:14.184 rmmod nvme_fabrics 00:20:14.184 14:39:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:14.184 14:39:22 -- nvmf/common.sh@124 -- # set -e 00:20:14.184 14:39:22 -- nvmf/common.sh@125 -- # return 0 00:20:14.184 14:39:22 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:14.184 14:39:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:14.184 14:39:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:14.184 14:39:22 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:14.184 14:39:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:14.184 14:39:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:14.184 14:39:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.184 14:39:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.184 14:39:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.442 14:39:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:14.442 14:39:22 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:14.442 14:39:22 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:14.442 14:39:22 -- nvmf/common.sh@675 -- # echo 0 00:20:14.442 14:39:22 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:14.442 14:39:22 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:14.442 14:39:22 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:14.442 14:39:22 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:14.442 14:39:22 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:14.442 14:39:22 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:20:14.443 14:39:22 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:15.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.268 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:15.268 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:15.268 ************************************ 00:20:15.268 END TEST nvmf_identify_kernel_target 00:20:15.268 ************************************ 00:20:15.268 00:20:15.268 real 0m2.765s 00:20:15.268 user 0m0.994s 00:20:15.268 sys 0m1.279s 00:20:15.268 14:39:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:15.268 14:39:23 -- common/autotest_common.sh@10 -- # set +x 00:20:15.268 14:39:23 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:15.268 14:39:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:15.268 14:39:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.268 14:39:23 -- common/autotest_common.sh@10 -- # set +x 00:20:15.268 ************************************ 00:20:15.268 START TEST nvmf_auth 00:20:15.268 ************************************ 00:20:15.268 14:39:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:15.527 * Looking for test storage... 
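For orientation before the auth run continues: the nvmf_identify_kernel_target test that just finished drives the kernel target purely through configfs, and its configure_kernel_target/clean_kernel_target steps traced above reduce to the short sequence sketched below. xtrace does not print redirection targets, so the attribute file names here are the standard kernel nvmet configfs ones and are an assumption rather than a transcript; the device, NQN, and address values are taken from this log.

# Export /dev/nvme1n1 as nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420 (TCP),
# then undo it in the same order clean_kernel_target used above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

modprobe nvmet    # nvmet_tcp ends up loaded too once the port is bound (it is removed again in the teardown)
mkdir "$subsys" "$subsys/namespaces/1" "$port"
# (the script also writes an "SPDK-<nqn>" model string into the subsystem at this point)
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Teardown, mirroring the rm/rmdir/modprobe -r order in the trace:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet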
00:20:15.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.527 14:39:23 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.527 14:39:23 -- nvmf/common.sh@7 -- # uname -s 00:20:15.527 14:39:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.527 14:39:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.527 14:39:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.527 14:39:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.527 14:39:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.527 14:39:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.527 14:39:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.527 14:39:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.527 14:39:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.527 14:39:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.527 14:39:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:20:15.527 14:39:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:20:15.527 14:39:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.527 14:39:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.527 14:39:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.527 14:39:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.527 14:39:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.527 14:39:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.527 14:39:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.527 14:39:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.528 14:39:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.528 14:39:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.528 14:39:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.528 14:39:23 -- paths/export.sh@5 -- # export PATH 00:20:15.528 14:39:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.528 14:39:23 -- nvmf/common.sh@47 -- # : 0 00:20:15.528 14:39:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.528 14:39:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.528 14:39:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.528 14:39:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.528 14:39:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.528 14:39:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.528 14:39:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.528 14:39:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.528 14:39:23 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:15.528 14:39:23 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:15.528 14:39:23 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:15.528 14:39:23 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:15.528 14:39:23 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:15.528 14:39:23 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:15.528 14:39:23 -- host/auth.sh@21 -- # keys=() 00:20:15.528 14:39:23 -- host/auth.sh@77 -- # nvmftestinit 00:20:15.528 14:39:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:15.528 14:39:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.528 14:39:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:15.528 14:39:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:15.528 14:39:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:15.528 14:39:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.528 14:39:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.528 14:39:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.528 14:39:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:15.528 14:39:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:15.528 14:39:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:15.528 14:39:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:15.528 14:39:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:15.528 14:39:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:15.528 14:39:23 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.528 14:39:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.528 14:39:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:15.528 14:39:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:15.528 14:39:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.528 14:39:23 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.528 14:39:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.528 14:39:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.528 14:39:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.528 14:39:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.528 14:39:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.528 14:39:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.528 14:39:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:15.528 14:39:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:15.528 Cannot find device "nvmf_tgt_br" 00:20:15.528 14:39:24 -- nvmf/common.sh@155 -- # true 00:20:15.528 14:39:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.528 Cannot find device "nvmf_tgt_br2" 00:20:15.528 14:39:24 -- nvmf/common.sh@156 -- # true 00:20:15.528 14:39:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:15.528 14:39:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:15.528 Cannot find device "nvmf_tgt_br" 00:20:15.528 14:39:24 -- nvmf/common.sh@158 -- # true 00:20:15.528 14:39:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:15.528 Cannot find device "nvmf_tgt_br2" 00:20:15.528 14:39:24 -- nvmf/common.sh@159 -- # true 00:20:15.528 14:39:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:15.528 14:39:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:15.528 14:39:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.806 14:39:24 -- nvmf/common.sh@162 -- # true 00:20:15.806 14:39:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.806 14:39:24 -- nvmf/common.sh@163 -- # true 00:20:15.806 14:39:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.806 14:39:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:15.806 14:39:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:15.806 14:39:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:15.806 14:39:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:15.806 14:39:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:15.806 14:39:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:15.806 14:39:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:15.806 14:39:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:15.806 14:39:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:15.806 14:39:24 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:15.806 14:39:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:15.806 14:39:24 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:15.806 14:39:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:15.806 14:39:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:15.806 14:39:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:15.806 14:39:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:15.806 14:39:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:15.806 14:39:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:15.806 14:39:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:15.806 14:39:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:15.806 14:39:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:15.806 14:39:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:15.806 14:39:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:15.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:15.806 00:20:15.806 --- 10.0.0.2 ping statistics --- 00:20:15.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.806 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:15.806 14:39:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:15.806 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:15.806 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:15.806 00:20:15.806 --- 10.0.0.3 ping statistics --- 00:20:15.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.806 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:15.806 14:39:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:15.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:15.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:20:15.806 00:20:15.806 --- 10.0.0.1 ping statistics --- 00:20:15.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.806 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:15.806 14:39:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.806 14:39:24 -- nvmf/common.sh@422 -- # return 0 00:20:15.806 14:39:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:15.806 14:39:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.806 14:39:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:15.806 14:39:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:15.806 14:39:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.806 14:39:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:15.806 14:39:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:15.806 14:39:24 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:20:15.806 14:39:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:15.806 14:39:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:15.806 14:39:24 -- common/autotest_common.sh@10 -- # set +x 00:20:15.806 14:39:24 -- nvmf/common.sh@470 -- # nvmfpid=74216 00:20:15.806 14:39:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:15.806 14:39:24 -- nvmf/common.sh@471 -- # waitforlisten 74216 00:20:15.806 14:39:24 -- common/autotest_common.sh@817 -- # '[' -z 74216 ']' 00:20:15.806 14:39:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.806 14:39:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:15.806 14:39:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
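The nvmfappstart/waitforlisten pair traced here launches the SPDK target inside the nvmf_tgt_ns_spdk namespace and then blocks until its RPC socket is usable. A stripped-down equivalent is sketched below; the polling loop is a simplified stand-in for the real waitforlisten helper in autotest_common.sh (which also checks that the process actually answers RPCs, not merely that the socket exists), while the binary path and flags are the ones visible in this trace.

# Start nvmf_tgt in the target namespace and wait for /var/tmp/spdk.sock.
NS=nvmf_tgt_ns_spdk
APP=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

for _ in $(seq 1 100); do
    [[ -S $RPC_SOCK ]] && break                                    # RPC socket is up
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done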
00:20:15.806 14:39:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:15.806 14:39:24 -- common/autotest_common.sh@10 -- # set +x 00:20:16.380 14:39:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:16.380 14:39:24 -- common/autotest_common.sh@850 -- # return 0 00:20:16.380 14:39:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:16.380 14:39:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:16.380 14:39:24 -- common/autotest_common.sh@10 -- # set +x 00:20:16.380 14:39:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.380 14:39:24 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:16.380 14:39:24 -- host/auth.sh@81 -- # gen_key null 32 00:20:16.380 14:39:24 -- host/auth.sh@53 -- # local digest len file key 00:20:16.380 14:39:24 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:16.380 14:39:24 -- host/auth.sh@54 -- # local -A digests 00:20:16.380 14:39:24 -- host/auth.sh@56 -- # digest=null 00:20:16.380 14:39:24 -- host/auth.sh@56 -- # len=32 00:20:16.380 14:39:24 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:16.380 14:39:24 -- host/auth.sh@57 -- # key=319d64fecd847f20d86a6873db986481 00:20:16.380 14:39:24 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:20:16.380 14:39:24 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.5qi 00:20:16.380 14:39:24 -- host/auth.sh@59 -- # format_dhchap_key 319d64fecd847f20d86a6873db986481 0 00:20:16.380 14:39:24 -- nvmf/common.sh@708 -- # format_key DHHC-1 319d64fecd847f20d86a6873db986481 0 00:20:16.380 14:39:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:16.380 14:39:24 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:16.380 14:39:24 -- nvmf/common.sh@693 -- # key=319d64fecd847f20d86a6873db986481 00:20:16.380 14:39:24 -- nvmf/common.sh@693 -- # digest=0 00:20:16.380 14:39:24 -- nvmf/common.sh@694 -- # python - 00:20:16.380 14:39:24 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.5qi 00:20:16.380 14:39:24 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.5qi 00:20:16.380 14:39:24 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.5qi 00:20:16.380 14:39:24 -- host/auth.sh@82 -- # gen_key null 48 00:20:16.380 14:39:24 -- host/auth.sh@53 -- # local digest len file key 00:20:16.380 14:39:24 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:16.380 14:39:24 -- host/auth.sh@54 -- # local -A digests 00:20:16.380 14:39:24 -- host/auth.sh@56 -- # digest=null 00:20:16.380 14:39:24 -- host/auth.sh@56 -- # len=48 00:20:16.380 14:39:24 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:16.380 14:39:24 -- host/auth.sh@57 -- # key=2898ae3f0e83a5d2bdc4029f61e8f532afaead1c9812ca65 00:20:16.380 14:39:24 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:20:16.380 14:39:24 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.5oH 00:20:16.380 14:39:24 -- host/auth.sh@59 -- # format_dhchap_key 2898ae3f0e83a5d2bdc4029f61e8f532afaead1c9812ca65 0 00:20:16.380 14:39:24 -- nvmf/common.sh@708 -- # format_key DHHC-1 2898ae3f0e83a5d2bdc4029f61e8f532afaead1c9812ca65 0 00:20:16.380 14:39:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:16.380 14:39:24 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:16.380 14:39:24 -- nvmf/common.sh@693 -- # key=2898ae3f0e83a5d2bdc4029f61e8f532afaead1c9812ca65 00:20:16.380 14:39:24 -- nvmf/common.sh@693 -- # digest=0 00:20:16.380 
14:39:24 -- nvmf/common.sh@694 -- # python - 00:20:16.380 14:39:24 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.5oH 00:20:16.380 14:39:24 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.5oH 00:20:16.380 14:39:24 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.5oH 00:20:16.380 14:39:24 -- host/auth.sh@83 -- # gen_key sha256 32 00:20:16.380 14:39:24 -- host/auth.sh@53 -- # local digest len file key 00:20:16.380 14:39:24 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:16.381 14:39:24 -- host/auth.sh@54 -- # local -A digests 00:20:16.381 14:39:24 -- host/auth.sh@56 -- # digest=sha256 00:20:16.381 14:39:24 -- host/auth.sh@56 -- # len=32 00:20:16.381 14:39:24 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:16.381 14:39:24 -- host/auth.sh@57 -- # key=292198a33afff0f7163c274c52fae514 00:20:16.381 14:39:24 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:20:16.381 14:39:24 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.K5v 00:20:16.381 14:39:24 -- host/auth.sh@59 -- # format_dhchap_key 292198a33afff0f7163c274c52fae514 1 00:20:16.381 14:39:24 -- nvmf/common.sh@708 -- # format_key DHHC-1 292198a33afff0f7163c274c52fae514 1 00:20:16.381 14:39:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:16.381 14:39:24 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:16.381 14:39:24 -- nvmf/common.sh@693 -- # key=292198a33afff0f7163c274c52fae514 00:20:16.381 14:39:24 -- nvmf/common.sh@693 -- # digest=1 00:20:16.381 14:39:24 -- nvmf/common.sh@694 -- # python - 00:20:16.381 14:39:24 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.K5v 00:20:16.381 14:39:24 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.K5v 00:20:16.381 14:39:24 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.K5v 00:20:16.381 14:39:24 -- host/auth.sh@84 -- # gen_key sha384 48 00:20:16.381 14:39:24 -- host/auth.sh@53 -- # local digest len file key 00:20:16.381 14:39:24 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:16.381 14:39:24 -- host/auth.sh@54 -- # local -A digests 00:20:16.381 14:39:24 -- host/auth.sh@56 -- # digest=sha384 00:20:16.381 14:39:24 -- host/auth.sh@56 -- # len=48 00:20:16.381 14:39:24 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:16.381 14:39:24 -- host/auth.sh@57 -- # key=af74d6f49def799708c0e2e3aa3d084c841def1fd012024e 00:20:16.381 14:39:24 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:20:16.381 14:39:24 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.UPv 00:20:16.381 14:39:24 -- host/auth.sh@59 -- # format_dhchap_key af74d6f49def799708c0e2e3aa3d084c841def1fd012024e 2 00:20:16.381 14:39:24 -- nvmf/common.sh@708 -- # format_key DHHC-1 af74d6f49def799708c0e2e3aa3d084c841def1fd012024e 2 00:20:16.381 14:39:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:16.381 14:39:24 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:16.381 14:39:24 -- nvmf/common.sh@693 -- # key=af74d6f49def799708c0e2e3aa3d084c841def1fd012024e 00:20:16.381 14:39:24 -- nvmf/common.sh@693 -- # digest=2 00:20:16.381 14:39:24 -- nvmf/common.sh@694 -- # python - 00:20:16.639 14:39:25 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.UPv 00:20:16.639 14:39:25 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.UPv 00:20:16.639 14:39:25 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.UPv 00:20:16.639 14:39:25 -- host/auth.sh@85 -- # gen_key sha512 64 00:20:16.639 14:39:25 -- host/auth.sh@53 -- # local digest len file key 00:20:16.639 14:39:25 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:16.639 14:39:25 -- host/auth.sh@54 -- # local -A digests 00:20:16.639 14:39:25 -- host/auth.sh@56 -- # digest=sha512 00:20:16.639 14:39:25 -- host/auth.sh@56 -- # len=64 00:20:16.639 14:39:25 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:16.639 14:39:25 -- host/auth.sh@57 -- # key=d00a4ca0637318cb7b425ba1149f86d31e2d30d9bcc4708102d75f1242c9a5c7 00:20:16.639 14:39:25 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:20:16.639 14:39:25 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.ZR2 00:20:16.639 14:39:25 -- host/auth.sh@59 -- # format_dhchap_key d00a4ca0637318cb7b425ba1149f86d31e2d30d9bcc4708102d75f1242c9a5c7 3 00:20:16.639 14:39:25 -- nvmf/common.sh@708 -- # format_key DHHC-1 d00a4ca0637318cb7b425ba1149f86d31e2d30d9bcc4708102d75f1242c9a5c7 3 00:20:16.639 14:39:25 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:16.639 14:39:25 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:16.639 14:39:25 -- nvmf/common.sh@693 -- # key=d00a4ca0637318cb7b425ba1149f86d31e2d30d9bcc4708102d75f1242c9a5c7 00:20:16.639 14:39:25 -- nvmf/common.sh@693 -- # digest=3 00:20:16.639 14:39:25 -- nvmf/common.sh@694 -- # python - 00:20:16.639 14:39:25 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.ZR2 00:20:16.639 14:39:25 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.ZR2 00:20:16.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.639 14:39:25 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.ZR2 00:20:16.639 14:39:25 -- host/auth.sh@87 -- # waitforlisten 74216 00:20:16.639 14:39:25 -- common/autotest_common.sh@817 -- # '[' -z 74216 ']' 00:20:16.640 14:39:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.640 14:39:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:16.640 14:39:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
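The gen_key calls traced above draw 16, 24 or 32 random bytes with xxd, keep the resulting hex string as the secret, and wrap it into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest id>:<base64>: through the small "python -" helper in nvmf/common.sh. A minimal stand-alone sketch of one such round is below; the CRC32 suffix and its little-endian byte order are assumptions about what that Python snippet does, inferred from the DHHC-1 strings that appear later in this log.

  # sketch only: one "gen_key null 32" round, reproducing the commands traced above
  key=$(xxd -p -c0 -l 16 /dev/urandom)       # 32 hex characters, as in the log
  file=$(mktemp -t spdk.key-null.XXX)
  # assumption: format_dhchap_key base64-encodes the ASCII hex secret plus a
  # little-endian CRC32 and prefixes it with DHHC-1:<digest id>:
  python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); d=int(sys.argv[2]); c=zlib.crc32(s).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (d, base64.b64encode(s+c).decode()))' "$key" 0 > "$file"
  chmod 0600 "$file"                         # matches the chmod 0600 in the trace

The /tmp/spdk.key-* files built this way are what the script registers below as key0..key4 with keyring_file_add_key and later mirrors into the kernel target with nvmet_auth_set_key.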
00:20:16.640 14:39:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:16.640 14:39:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.907 14:39:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:16.907 14:39:25 -- common/autotest_common.sh@850 -- # return 0 00:20:16.907 14:39:25 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:16.907 14:39:25 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5qi 00:20:16.907 14:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.907 14:39:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.907 14:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.907 14:39:25 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:16.907 14:39:25 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5oH 00:20:16.907 14:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.907 14:39:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.907 14:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.907 14:39:25 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:16.907 14:39:25 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.K5v 00:20:16.908 14:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.908 14:39:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.908 14:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.908 14:39:25 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:16.908 14:39:25 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.UPv 00:20:16.908 14:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.908 14:39:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.908 14:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.908 14:39:25 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:16.908 14:39:25 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZR2 00:20:16.908 14:39:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.908 14:39:25 -- common/autotest_common.sh@10 -- # set +x 00:20:16.908 14:39:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.908 14:39:25 -- host/auth.sh@92 -- # nvmet_auth_init 00:20:16.908 14:39:25 -- host/auth.sh@35 -- # get_main_ns_ip 00:20:16.908 14:39:25 -- nvmf/common.sh@717 -- # local ip 00:20:16.908 14:39:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:16.908 14:39:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:16.908 14:39:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.908 14:39:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.908 14:39:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:16.908 14:39:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.908 14:39:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:16.908 14:39:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:16.908 14:39:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:16.908 14:39:25 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:16.908 14:39:25 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:16.908 14:39:25 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:16.908 14:39:25 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:16.908 14:39:25 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:16.908 14:39:25 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:16.908 14:39:25 -- nvmf/common.sh@628 -- # local block nvme 00:20:16.908 14:39:25 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:20:16.908 14:39:25 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:16.908 14:39:25 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:16.908 14:39:25 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:17.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:17.182 Waiting for block devices as requested 00:20:17.440 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:17.440 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:18.006 14:39:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:18.006 14:39:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:18.006 14:39:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:18.006 14:39:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:18.006 14:39:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:18.006 14:39:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:18.006 14:39:26 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:18.006 14:39:26 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:18.006 14:39:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:18.006 No valid GPT data, bailing 00:20:18.006 14:39:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:18.006 14:39:26 -- scripts/common.sh@391 -- # pt= 00:20:18.006 14:39:26 -- scripts/common.sh@392 -- # return 1 00:20:18.006 14:39:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:18.006 14:39:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:18.007 14:39:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:18.007 14:39:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:20:18.007 14:39:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:18.007 14:39:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:18.007 14:39:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:18.007 14:39:26 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:20:18.007 14:39:26 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:18.007 14:39:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:18.265 No valid GPT data, bailing 00:20:18.265 14:39:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:18.265 14:39:26 -- scripts/common.sh@391 -- # pt= 00:20:18.265 14:39:26 -- scripts/common.sh@392 -- # return 1 00:20:18.265 14:39:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:20:18.265 14:39:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:18.265 14:39:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:18.265 14:39:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:20:18.265 14:39:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:18.265 14:39:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:18.265 14:39:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:18.265 14:39:26 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:20:18.265 14:39:26 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:18.265 14:39:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:18.265 No valid GPT data, bailing 00:20:18.265 14:39:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:18.265 14:39:26 -- scripts/common.sh@391 -- # pt= 00:20:18.265 14:39:26 -- scripts/common.sh@392 -- # return 1 00:20:18.265 14:39:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:20:18.265 14:39:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:18.265 14:39:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:18.265 14:39:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:20:18.265 14:39:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:18.265 14:39:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:18.265 14:39:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:18.265 14:39:26 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:20:18.265 14:39:26 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:18.265 14:39:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:18.265 No valid GPT data, bailing 00:20:18.265 14:39:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:18.265 14:39:26 -- scripts/common.sh@391 -- # pt= 00:20:18.265 14:39:26 -- scripts/common.sh@392 -- # return 1 00:20:18.265 14:39:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:20:18.265 14:39:26 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:20:18.265 14:39:26 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:18.265 14:39:26 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:18.265 14:39:26 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:18.265 14:39:26 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:18.265 14:39:26 -- nvmf/common.sh@656 -- # echo 1 00:20:18.265 14:39:26 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:20:18.265 14:39:26 -- nvmf/common.sh@658 -- # echo 1 00:20:18.265 14:39:26 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:20:18.265 14:39:26 -- nvmf/common.sh@661 -- # echo tcp 00:20:18.265 14:39:26 -- nvmf/common.sh@662 -- # echo 4420 00:20:18.265 14:39:26 -- nvmf/common.sh@663 -- # echo ipv4 00:20:18.265 14:39:26 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:18.265 14:39:26 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 --hostid=c475d660-18c3-4238-bb35-f293e0cc1403 -a 10.0.0.1 -t tcp -s 4420 00:20:18.265 00:20:18.265 Discovery Log Number of Records 2, Generation counter 2 00:20:18.265 =====Discovery Log Entry 0====== 00:20:18.265 trtype: tcp 00:20:18.265 adrfam: ipv4 00:20:18.265 subtype: current discovery subsystem 00:20:18.265 treq: not specified, sq flow control disable supported 00:20:18.265 portid: 1 00:20:18.265 trsvcid: 4420 00:20:18.265 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:18.265 traddr: 10.0.0.1 00:20:18.265 eflags: none 00:20:18.265 sectype: none 00:20:18.265 =====Discovery Log Entry 1====== 00:20:18.265 trtype: tcp 00:20:18.265 adrfam: ipv4 00:20:18.265 subtype: nvme subsystem 00:20:18.265 treq: not specified, sq flow control disable supported 
00:20:18.265 portid: 1 00:20:18.265 trsvcid: 4420 00:20:18.265 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:18.265 traddr: 10.0.0.1 00:20:18.265 eflags: none 00:20:18.265 sectype: none 00:20:18.265 14:39:26 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:18.523 14:39:26 -- host/auth.sh@37 -- # echo 0 00:20:18.523 14:39:26 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:18.523 14:39:26 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:18.523 14:39:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:18.523 14:39:26 -- host/auth.sh@44 -- # digest=sha256 00:20:18.523 14:39:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.523 14:39:26 -- host/auth.sh@44 -- # keyid=1 00:20:18.523 14:39:26 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:18.523 14:39:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:18.523 14:39:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:18.523 14:39:26 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:18.523 14:39:26 -- host/auth.sh@100 -- # IFS=, 00:20:18.523 14:39:26 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:20:18.523 14:39:26 -- host/auth.sh@100 -- # IFS=, 00:20:18.523 14:39:26 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:18.523 14:39:26 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:18.523 14:39:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:18.523 14:39:26 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:20:18.523 14:39:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:18.523 14:39:26 -- host/auth.sh@68 -- # keyid=1 00:20:18.523 14:39:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:18.523 14:39:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.523 14:39:26 -- common/autotest_common.sh@10 -- # set +x 00:20:18.523 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.523 14:39:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:18.523 14:39:27 -- nvmf/common.sh@717 -- # local ip 00:20:18.523 14:39:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:18.523 14:39:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:18.523 14:39:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.523 14:39:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.523 14:39:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:18.523 14:39:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.523 14:39:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:18.523 14:39:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:18.523 14:39:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:18.523 14:39:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:18.523 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.523 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.523 
nvme0n1 00:20:18.523 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.523 14:39:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.523 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.523 14:39:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:18.523 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.523 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.781 14:39:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.781 14:39:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.781 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.781 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.781 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.781 14:39:27 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:18.781 14:39:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.781 14:39:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:18.781 14:39:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:18.781 14:39:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:18.781 14:39:27 -- host/auth.sh@44 -- # digest=sha256 00:20:18.781 14:39:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.781 14:39:27 -- host/auth.sh@44 -- # keyid=0 00:20:18.781 14:39:27 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:18.781 14:39:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:18.781 14:39:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:18.781 14:39:27 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:18.781 14:39:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:20:18.781 14:39:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:18.781 14:39:27 -- host/auth.sh@68 -- # digest=sha256 00:20:18.781 14:39:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:18.781 14:39:27 -- host/auth.sh@68 -- # keyid=0 00:20:18.781 14:39:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:18.781 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.781 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.781 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.781 14:39:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:18.781 14:39:27 -- nvmf/common.sh@717 -- # local ip 00:20:18.781 14:39:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:18.781 14:39:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:18.781 14:39:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.781 14:39:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.781 14:39:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:18.781 14:39:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.781 14:39:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:18.781 14:39:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:18.781 14:39:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:18.781 14:39:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:18.781 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.781 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.781 nvme0n1 
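From this point the host/auth.sh@107-111 loop repeats the same cycle for every digest, DH group and key index: refresh the kernel target's expected secret with nvmet_auth_set_key, then run a four-step RPC sequence on the SPDK initiator, namely reconfigure the DH-HMAC-CHAP options, attach to the kernel target with one of the registered keyring keys, verify the controller shows up, and detach. A condensed sketch of one iteration is below, written against scripts/rpc.py directly; the assumption is that the rpc_cmd helper in the trace simply forwards its arguments to that script, and all RPC names and flags are taken verbatim from the log.

  # sketch only: one connect_authenticate iteration (sha256 / ffdhe2048 / key1)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # assumed target of the rpc_cmd helper
  $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
  [[ $($RPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller connected
  $RPC bdev_nvme_detach_controller nvme0

Each subsequent pass in the log exercises a different --dhchap-digests/--dhchap-dhgroups/--dhchap-key combination against the same kernel target, which is why the get_controllers/detach_controller pattern repeats throughout the rest of this section.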
00:20:18.781 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.781 14:39:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.781 14:39:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:18.781 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.781 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.781 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.781 14:39:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.781 14:39:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.781 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.781 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.781 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.781 14:39:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:18.781 14:39:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:18.781 14:39:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:18.781 14:39:27 -- host/auth.sh@44 -- # digest=sha256 00:20:18.781 14:39:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.781 14:39:27 -- host/auth.sh@44 -- # keyid=1 00:20:18.781 14:39:27 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:18.781 14:39:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:18.781 14:39:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:18.781 14:39:27 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:18.781 14:39:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:20:18.781 14:39:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:18.781 14:39:27 -- host/auth.sh@68 -- # digest=sha256 00:20:18.781 14:39:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:18.781 14:39:27 -- host/auth.sh@68 -- # keyid=1 00:20:18.781 14:39:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:18.781 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.781 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.781 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.781 14:39:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:18.781 14:39:27 -- nvmf/common.sh@717 -- # local ip 00:20:18.781 14:39:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:18.781 14:39:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:18.781 14:39:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.781 14:39:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.781 14:39:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:18.781 14:39:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.781 14:39:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:18.781 14:39:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:18.781 14:39:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:18.781 14:39:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:18.781 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.781 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.039 nvme0n1 00:20:19.039 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.039 14:39:27 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:19.039 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.039 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.039 14:39:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:19.039 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.039 14:39:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.039 14:39:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.039 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.039 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.039 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.039 14:39:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:19.039 14:39:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:19.039 14:39:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:19.039 14:39:27 -- host/auth.sh@44 -- # digest=sha256 00:20:19.039 14:39:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:19.039 14:39:27 -- host/auth.sh@44 -- # keyid=2 00:20:19.039 14:39:27 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:19.039 14:39:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:19.039 14:39:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:19.039 14:39:27 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:19.039 14:39:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:20:19.039 14:39:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:19.039 14:39:27 -- host/auth.sh@68 -- # digest=sha256 00:20:19.039 14:39:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:19.039 14:39:27 -- host/auth.sh@68 -- # keyid=2 00:20:19.039 14:39:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.039 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.039 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.039 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.039 14:39:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:19.039 14:39:27 -- nvmf/common.sh@717 -- # local ip 00:20:19.039 14:39:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:19.039 14:39:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:19.039 14:39:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.040 14:39:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.040 14:39:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:19.040 14:39:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.040 14:39:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:19.040 14:39:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:19.040 14:39:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:19.040 14:39:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:19.040 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.040 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.040 nvme0n1 00:20:19.040 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.040 14:39:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.040 14:39:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:19.040 14:39:27 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:20:19.297 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.297 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.297 14:39:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.297 14:39:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.297 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.297 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.297 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.297 14:39:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:19.297 14:39:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:19.297 14:39:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:19.297 14:39:27 -- host/auth.sh@44 -- # digest=sha256 00:20:19.297 14:39:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:19.297 14:39:27 -- host/auth.sh@44 -- # keyid=3 00:20:19.297 14:39:27 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:19.297 14:39:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:19.297 14:39:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:19.297 14:39:27 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:19.297 14:39:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:20:19.297 14:39:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:19.297 14:39:27 -- host/auth.sh@68 -- # digest=sha256 00:20:19.297 14:39:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:19.297 14:39:27 -- host/auth.sh@68 -- # keyid=3 00:20:19.297 14:39:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.297 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.297 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.297 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.297 14:39:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:19.297 14:39:27 -- nvmf/common.sh@717 -- # local ip 00:20:19.297 14:39:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:19.297 14:39:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:19.297 14:39:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.297 14:39:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.297 14:39:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:19.297 14:39:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.297 14:39:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:19.297 14:39:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:19.297 14:39:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:19.297 14:39:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:19.297 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.297 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.297 nvme0n1 00:20:19.297 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.297 14:39:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.297 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.297 14:39:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:19.297 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.297 14:39:27 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.297 14:39:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.297 14:39:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.297 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.297 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.297 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.298 14:39:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:19.298 14:39:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:19.298 14:39:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:19.298 14:39:27 -- host/auth.sh@44 -- # digest=sha256 00:20:19.298 14:39:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:19.298 14:39:27 -- host/auth.sh@44 -- # keyid=4 00:20:19.298 14:39:27 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:19.298 14:39:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:19.298 14:39:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:19.298 14:39:27 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:19.556 14:39:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:20:19.556 14:39:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:19.556 14:39:27 -- host/auth.sh@68 -- # digest=sha256 00:20:19.556 14:39:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:19.556 14:39:27 -- host/auth.sh@68 -- # keyid=4 00:20:19.556 14:39:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.556 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.556 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.556 14:39:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.556 14:39:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:19.556 14:39:27 -- nvmf/common.sh@717 -- # local ip 00:20:19.556 14:39:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:19.556 14:39:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:19.556 14:39:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.556 14:39:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.556 14:39:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:19.556 14:39:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.556 14:39:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:19.556 14:39:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:19.556 14:39:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:19.556 14:39:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:19.556 14:39:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.556 14:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.556 nvme0n1 00:20:19.556 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.556 14:39:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:19.556 14:39:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.556 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.556 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.556 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.556 14:39:28 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.556 14:39:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.556 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.556 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.556 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.556 14:39:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.556 14:39:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:19.556 14:39:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:19.556 14:39:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:19.556 14:39:28 -- host/auth.sh@44 -- # digest=sha256 00:20:19.556 14:39:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:19.556 14:39:28 -- host/auth.sh@44 -- # keyid=0 00:20:19.556 14:39:28 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:19.556 14:39:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:19.556 14:39:28 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:19.814 14:39:28 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:19.814 14:39:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:20:19.814 14:39:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:19.814 14:39:28 -- host/auth.sh@68 -- # digest=sha256 00:20:19.814 14:39:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:19.814 14:39:28 -- host/auth.sh@68 -- # keyid=0 00:20:19.814 14:39:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.814 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.814 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.814 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.814 14:39:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:19.814 14:39:28 -- nvmf/common.sh@717 -- # local ip 00:20:19.814 14:39:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:19.814 14:39:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:19.814 14:39:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.814 14:39:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.814 14:39:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:19.814 14:39:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.814 14:39:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:19.814 14:39:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:19.814 14:39:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:19.815 14:39:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:19.815 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.815 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.073 nvme0n1 00:20:20.073 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.073 14:39:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.073 14:39:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.073 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.073 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.073 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.073 14:39:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.073 14:39:28 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.073 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.073 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.073 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.073 14:39:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.073 14:39:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:20.073 14:39:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.073 14:39:28 -- host/auth.sh@44 -- # digest=sha256 00:20:20.073 14:39:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:20.073 14:39:28 -- host/auth.sh@44 -- # keyid=1 00:20:20.073 14:39:28 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:20.073 14:39:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.073 14:39:28 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:20.073 14:39:28 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:20.073 14:39:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:20:20.073 14:39:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.073 14:39:28 -- host/auth.sh@68 -- # digest=sha256 00:20:20.073 14:39:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:20.073 14:39:28 -- host/auth.sh@68 -- # keyid=1 00:20:20.073 14:39:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:20.073 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.073 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.073 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.073 14:39:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.074 14:39:28 -- nvmf/common.sh@717 -- # local ip 00:20:20.074 14:39:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.074 14:39:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.074 14:39:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.074 14:39:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.074 14:39:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.074 14:39:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.074 14:39:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.074 14:39:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.074 14:39:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.074 14:39:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:20.074 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.074 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.074 nvme0n1 00:20:20.074 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.074 14:39:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.074 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.074 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.074 14:39:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.332 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.332 14:39:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.332 14:39:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.332 14:39:28 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:20:20.332 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.332 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.332 14:39:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.332 14:39:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:20.332 14:39:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.332 14:39:28 -- host/auth.sh@44 -- # digest=sha256 00:20:20.332 14:39:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:20.332 14:39:28 -- host/auth.sh@44 -- # keyid=2 00:20:20.332 14:39:28 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:20.332 14:39:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.332 14:39:28 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:20.332 14:39:28 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:20.332 14:39:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:20:20.332 14:39:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.332 14:39:28 -- host/auth.sh@68 -- # digest=sha256 00:20:20.332 14:39:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:20.332 14:39:28 -- host/auth.sh@68 -- # keyid=2 00:20:20.332 14:39:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:20.332 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.332 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.332 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.332 14:39:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.332 14:39:28 -- nvmf/common.sh@717 -- # local ip 00:20:20.332 14:39:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.332 14:39:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.332 14:39:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.332 14:39:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.332 14:39:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.332 14:39:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.332 14:39:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.332 14:39:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.332 14:39:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.332 14:39:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:20.332 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.332 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.332 nvme0n1 00:20:20.332 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.332 14:39:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.332 14:39:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.332 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.332 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.333 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.333 14:39:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.333 14:39:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.333 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.333 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.605 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.605 
14:39:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.605 14:39:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:20.605 14:39:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.605 14:39:28 -- host/auth.sh@44 -- # digest=sha256 00:20:20.605 14:39:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:20.605 14:39:28 -- host/auth.sh@44 -- # keyid=3 00:20:20.605 14:39:28 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:20.605 14:39:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.605 14:39:28 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:20.605 14:39:28 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:20.605 14:39:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:20:20.605 14:39:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.605 14:39:28 -- host/auth.sh@68 -- # digest=sha256 00:20:20.605 14:39:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:20.605 14:39:28 -- host/auth.sh@68 -- # keyid=3 00:20:20.605 14:39:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:20.605 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.605 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.605 14:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.605 14:39:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.605 14:39:28 -- nvmf/common.sh@717 -- # local ip 00:20:20.605 14:39:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.605 14:39:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.605 14:39:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.605 14:39:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.605 14:39:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.605 14:39:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.605 14:39:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.605 14:39:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.605 14:39:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.605 14:39:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:20.605 14:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.605 14:39:28 -- common/autotest_common.sh@10 -- # set +x 00:20:20.605 nvme0n1 00:20:20.605 14:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.605 14:39:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.605 14:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.605 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.605 14:39:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.605 14:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.605 14:39:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.605 14:39:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.605 14:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.605 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.605 14:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.605 14:39:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.605 14:39:29 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:20:20.605 14:39:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.605 14:39:29 -- host/auth.sh@44 -- # digest=sha256 00:20:20.605 14:39:29 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:20.605 14:39:29 -- host/auth.sh@44 -- # keyid=4 00:20:20.605 14:39:29 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:20.605 14:39:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.605 14:39:29 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:20.605 14:39:29 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:20.605 14:39:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:20:20.605 14:39:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.605 14:39:29 -- host/auth.sh@68 -- # digest=sha256 00:20:20.605 14:39:29 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:20.605 14:39:29 -- host/auth.sh@68 -- # keyid=4 00:20:20.605 14:39:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:20.605 14:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.605 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.605 14:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.605 14:39:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.605 14:39:29 -- nvmf/common.sh@717 -- # local ip 00:20:20.605 14:39:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.605 14:39:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.605 14:39:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.605 14:39:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.605 14:39:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.605 14:39:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.605 14:39:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.605 14:39:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.605 14:39:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.605 14:39:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:20.605 14:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.605 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.864 nvme0n1 00:20:20.864 14:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.864 14:39:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.864 14:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.864 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.864 14:39:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.864 14:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.864 14:39:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.864 14:39:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.864 14:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.864 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:20:20.864 14:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.864 14:39:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.864 14:39:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.864 14:39:29 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:20:20.864 14:39:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.864 14:39:29 -- host/auth.sh@44 -- # digest=sha256 00:20:20.864 14:39:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:20.864 14:39:29 -- host/auth.sh@44 -- # keyid=0 00:20:20.864 14:39:29 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:20.864 14:39:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.864 14:39:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:21.432 14:39:29 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:21.432 14:39:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:20:21.432 14:39:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:21.432 14:39:29 -- host/auth.sh@68 -- # digest=sha256 00:20:21.432 14:39:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:21.432 14:39:29 -- host/auth.sh@68 -- # keyid=0 00:20:21.432 14:39:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.432 14:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.432 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:20:21.432 14:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.432 14:39:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:21.432 14:39:29 -- nvmf/common.sh@717 -- # local ip 00:20:21.432 14:39:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:21.432 14:39:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:21.432 14:39:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.432 14:39:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.432 14:39:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:21.432 14:39:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.432 14:39:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:21.432 14:39:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:21.432 14:39:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:21.432 14:39:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:21.432 14:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.432 14:39:29 -- common/autotest_common.sh@10 -- # set +x 00:20:21.691 nvme0n1 00:20:21.691 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.691 14:39:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.691 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.691 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.691 14:39:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:21.691 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.691 14:39:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.691 14:39:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.691 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.691 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.691 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.691 14:39:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:21.691 14:39:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:21.691 14:39:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:21.691 14:39:30 -- host/auth.sh@44 -- # 
digest=sha256 00:20:21.691 14:39:30 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:21.691 14:39:30 -- host/auth.sh@44 -- # keyid=1 00:20:21.691 14:39:30 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:21.691 14:39:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:21.691 14:39:30 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:21.691 14:39:30 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:21.691 14:39:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:20:21.691 14:39:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:21.691 14:39:30 -- host/auth.sh@68 -- # digest=sha256 00:20:21.691 14:39:30 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:21.691 14:39:30 -- host/auth.sh@68 -- # keyid=1 00:20:21.691 14:39:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.691 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.691 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.691 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.691 14:39:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:21.691 14:39:30 -- nvmf/common.sh@717 -- # local ip 00:20:21.691 14:39:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:21.691 14:39:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:21.691 14:39:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.691 14:39:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.691 14:39:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:21.691 14:39:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.691 14:39:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:21.691 14:39:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:21.691 14:39:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:21.691 14:39:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:21.691 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.691 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.949 nvme0n1 00:20:21.949 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.949 14:39:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.949 14:39:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:21.949 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.949 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.949 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.949 14:39:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.949 14:39:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.949 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.949 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.949 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.950 14:39:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:21.950 14:39:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:21.950 14:39:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:21.950 14:39:30 -- host/auth.sh@44 -- # digest=sha256 00:20:21.950 14:39:30 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:21.950 14:39:30 -- host/auth.sh@44 
-- # keyid=2 00:20:21.950 14:39:30 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:21.950 14:39:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:21.950 14:39:30 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:21.950 14:39:30 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:21.950 14:39:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:20:21.950 14:39:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:21.950 14:39:30 -- host/auth.sh@68 -- # digest=sha256 00:20:21.950 14:39:30 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:21.950 14:39:30 -- host/auth.sh@68 -- # keyid=2 00:20:21.950 14:39:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.950 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.950 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.950 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.950 14:39:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:21.950 14:39:30 -- nvmf/common.sh@717 -- # local ip 00:20:21.950 14:39:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:21.950 14:39:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:21.950 14:39:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.950 14:39:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.950 14:39:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:21.950 14:39:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.950 14:39:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:21.950 14:39:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:21.950 14:39:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:21.950 14:39:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:21.950 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.950 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:22.208 nvme0n1 00:20:22.208 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.208 14:39:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.208 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.208 14:39:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:22.208 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:22.208 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.208 14:39:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.208 14:39:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.208 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.208 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:22.208 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.208 14:39:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:22.208 14:39:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:22.208 14:39:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:22.208 14:39:30 -- host/auth.sh@44 -- # digest=sha256 00:20:22.208 14:39:30 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:22.208 14:39:30 -- host/auth.sh@44 -- # keyid=3 00:20:22.208 14:39:30 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:22.208 14:39:30 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:22.208 14:39:30 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:22.208 14:39:30 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:22.208 14:39:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:20:22.208 14:39:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:22.208 14:39:30 -- host/auth.sh@68 -- # digest=sha256 00:20:22.208 14:39:30 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:22.208 14:39:30 -- host/auth.sh@68 -- # keyid=3 00:20:22.208 14:39:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:22.208 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.208 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:22.208 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.208 14:39:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:22.208 14:39:30 -- nvmf/common.sh@717 -- # local ip 00:20:22.208 14:39:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:22.208 14:39:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:22.208 14:39:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.208 14:39:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.208 14:39:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:22.208 14:39:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.208 14:39:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:22.208 14:39:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:22.208 14:39:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:22.208 14:39:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:22.208 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.208 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:22.466 nvme0n1 00:20:22.466 14:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.466 14:39:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.466 14:39:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:22.466 14:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.466 14:39:30 -- common/autotest_common.sh@10 -- # set +x 00:20:22.466 14:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.466 14:39:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.466 14:39:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.466 14:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.466 14:39:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.466 14:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.466 14:39:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:22.466 14:39:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:22.466 14:39:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:22.466 14:39:31 -- host/auth.sh@44 -- # digest=sha256 00:20:22.466 14:39:31 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:22.466 14:39:31 -- host/auth.sh@44 -- # keyid=4 00:20:22.467 14:39:31 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:22.467 14:39:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:22.467 14:39:31 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:20:22.467 14:39:31 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:22.467 14:39:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:20:22.467 14:39:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:22.467 14:39:31 -- host/auth.sh@68 -- # digest=sha256 00:20:22.467 14:39:31 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:22.467 14:39:31 -- host/auth.sh@68 -- # keyid=4 00:20:22.467 14:39:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:22.467 14:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.467 14:39:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.726 14:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.726 14:39:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:22.726 14:39:31 -- nvmf/common.sh@717 -- # local ip 00:20:22.726 14:39:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:22.726 14:39:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:22.726 14:39:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.726 14:39:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.726 14:39:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:22.726 14:39:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.726 14:39:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:22.726 14:39:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:22.726 14:39:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:22.726 14:39:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:22.726 14:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.726 14:39:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.726 nvme0n1 00:20:22.726 14:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.726 14:39:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.726 14:39:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:22.726 14:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.726 14:39:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.726 14:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.984 14:39:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.984 14:39:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.984 14:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.984 14:39:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.984 14:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.984 14:39:31 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.984 14:39:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:22.984 14:39:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:22.984 14:39:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:22.984 14:39:31 -- host/auth.sh@44 -- # digest=sha256 00:20:22.984 14:39:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:22.984 14:39:31 -- host/auth.sh@44 -- # keyid=0 00:20:22.984 14:39:31 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:22.984 14:39:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:22.985 14:39:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:24.904 14:39:33 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:24.904 14:39:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:20:24.904 14:39:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:24.904 14:39:33 -- host/auth.sh@68 -- # digest=sha256 00:20:24.904 14:39:33 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:24.904 14:39:33 -- host/auth.sh@68 -- # keyid=0 00:20:24.904 14:39:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.904 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.904 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.904 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.904 14:39:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:24.904 14:39:33 -- nvmf/common.sh@717 -- # local ip 00:20:24.904 14:39:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:24.904 14:39:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:24.904 14:39:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.904 14:39:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.904 14:39:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:24.904 14:39:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.904 14:39:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:24.904 14:39:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:24.904 14:39:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:24.904 14:39:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:24.904 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.904 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.182 nvme0n1 00:20:25.182 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.182 14:39:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.182 14:39:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:25.182 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.182 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.182 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.182 14:39:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.182 14:39:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.182 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.182 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.182 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.182 14:39:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.182 14:39:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:25.182 14:39:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.182 14:39:33 -- host/auth.sh@44 -- # digest=sha256 00:20:25.182 14:39:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:25.182 14:39:33 -- host/auth.sh@44 -- # keyid=1 00:20:25.182 14:39:33 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:25.182 14:39:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:25.182 14:39:33 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:25.182 14:39:33 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:25.182 14:39:33 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:20:25.182 14:39:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.182 14:39:33 -- host/auth.sh@68 -- # digest=sha256 00:20:25.182 14:39:33 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:25.182 14:39:33 -- host/auth.sh@68 -- # keyid=1 00:20:25.182 14:39:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:25.182 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.182 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.182 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.182 14:39:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:25.182 14:39:33 -- nvmf/common.sh@717 -- # local ip 00:20:25.182 14:39:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:25.182 14:39:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.182 14:39:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.182 14:39:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.182 14:39:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:25.182 14:39:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.182 14:39:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:25.182 14:39:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:25.182 14:39:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:25.182 14:39:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:25.182 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.182 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.441 nvme0n1 00:20:25.441 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.441 14:39:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.441 14:39:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:25.441 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.441 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.441 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.441 14:39:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.441 14:39:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.441 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.441 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.441 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.441 14:39:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.441 14:39:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:25.441 14:39:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.441 14:39:34 -- host/auth.sh@44 -- # digest=sha256 00:20:25.441 14:39:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:25.441 14:39:34 -- host/auth.sh@44 -- # keyid=2 00:20:25.442 14:39:34 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:25.442 14:39:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:25.442 14:39:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:25.442 14:39:34 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:25.442 14:39:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:20:25.442 14:39:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.442 14:39:34 -- 
host/auth.sh@68 -- # digest=sha256 00:20:25.442 14:39:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:25.442 14:39:34 -- host/auth.sh@68 -- # keyid=2 00:20:25.442 14:39:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:25.442 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.442 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:25.442 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.442 14:39:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:25.442 14:39:34 -- nvmf/common.sh@717 -- # local ip 00:20:25.442 14:39:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:25.442 14:39:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.442 14:39:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.442 14:39:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.442 14:39:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:25.442 14:39:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.442 14:39:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:25.442 14:39:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:25.442 14:39:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:25.442 14:39:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:25.442 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.442 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.010 nvme0n1 00:20:26.010 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.010 14:39:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.010 14:39:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.010 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.010 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.010 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.010 14:39:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.010 14:39:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.010 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.010 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.010 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.010 14:39:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.010 14:39:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:26.010 14:39:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.010 14:39:34 -- host/auth.sh@44 -- # digest=sha256 00:20:26.010 14:39:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:26.010 14:39:34 -- host/auth.sh@44 -- # keyid=3 00:20:26.010 14:39:34 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:26.010 14:39:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:26.010 14:39:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:26.010 14:39:34 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:26.010 14:39:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:20:26.010 14:39:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:26.010 14:39:34 -- host/auth.sh@68 -- # digest=sha256 00:20:26.010 14:39:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:26.010 14:39:34 
-- host/auth.sh@68 -- # keyid=3 00:20:26.010 14:39:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.010 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.010 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.010 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.010 14:39:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.010 14:39:34 -- nvmf/common.sh@717 -- # local ip 00:20:26.010 14:39:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:26.010 14:39:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:26.010 14:39:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.010 14:39:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.010 14:39:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:26.010 14:39:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.010 14:39:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:26.010 14:39:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:26.010 14:39:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:26.010 14:39:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:26.010 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.010 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.269 nvme0n1 00:20:26.269 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.269 14:39:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.269 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.269 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.269 14:39:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.269 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.269 14:39:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.269 14:39:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.269 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.269 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.528 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.528 14:39:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.528 14:39:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:26.528 14:39:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.528 14:39:34 -- host/auth.sh@44 -- # digest=sha256 00:20:26.528 14:39:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:26.528 14:39:34 -- host/auth.sh@44 -- # keyid=4 00:20:26.528 14:39:34 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:26.528 14:39:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:26.528 14:39:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:26.528 14:39:34 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:26.528 14:39:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:20:26.528 14:39:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:26.528 14:39:34 -- host/auth.sh@68 -- # digest=sha256 00:20:26.528 14:39:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:26.528 14:39:34 -- host/auth.sh@68 -- # keyid=4 00:20:26.528 14:39:34 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.528 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.528 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.528 14:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.528 14:39:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.528 14:39:34 -- nvmf/common.sh@717 -- # local ip 00:20:26.528 14:39:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:26.528 14:39:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:26.528 14:39:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.528 14:39:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.528 14:39:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:26.528 14:39:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.528 14:39:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:26.528 14:39:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:26.528 14:39:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:26.528 14:39:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:26.528 14:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.528 14:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:26.787 nvme0n1 00:20:26.787 14:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.787 14:39:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.787 14:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.787 14:39:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.787 14:39:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.787 14:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.788 14:39:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.788 14:39:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.788 14:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.788 14:39:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.788 14:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.788 14:39:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.788 14:39:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.788 14:39:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:26.788 14:39:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.788 14:39:35 -- host/auth.sh@44 -- # digest=sha256 00:20:26.788 14:39:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:26.788 14:39:35 -- host/auth.sh@44 -- # keyid=0 00:20:26.788 14:39:35 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:26.788 14:39:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:26.788 14:39:35 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:30.981 14:39:39 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:30.981 14:39:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:20:30.981 14:39:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.981 14:39:39 -- host/auth.sh@68 -- # digest=sha256 00:20:30.981 14:39:39 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:30.981 14:39:39 -- host/auth.sh@68 -- # keyid=0 00:20:30.981 14:39:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
00:20:30.981 14:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.981 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:20:30.981 14:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.981 14:39:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.981 14:39:39 -- nvmf/common.sh@717 -- # local ip 00:20:30.981 14:39:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.981 14:39:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.981 14:39:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.981 14:39:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.981 14:39:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.981 14:39:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.981 14:39:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.981 14:39:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.981 14:39:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.982 14:39:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:30.982 14:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.982 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:20:31.549 nvme0n1 00:20:31.549 14:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.549 14:39:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.549 14:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.549 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:20:31.549 14:39:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.549 14:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.549 14:39:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.549 14:39:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.549 14:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.549 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:20:31.549 14:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.549 14:39:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.549 14:39:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:31.549 14:39:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.549 14:39:39 -- host/auth.sh@44 -- # digest=sha256 00:20:31.549 14:39:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:31.549 14:39:39 -- host/auth.sh@44 -- # keyid=1 00:20:31.549 14:39:39 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:31.549 14:39:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:31.549 14:39:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:31.549 14:39:39 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:31.549 14:39:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:20:31.549 14:39:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.549 14:39:39 -- host/auth.sh@68 -- # digest=sha256 00:20:31.549 14:39:39 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:31.549 14:39:39 -- host/auth.sh@68 -- # keyid=1 00:20:31.549 14:39:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.549 14:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.549 14:39:40 -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.549 14:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.549 14:39:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.549 14:39:40 -- nvmf/common.sh@717 -- # local ip 00:20:31.549 14:39:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.549 14:39:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.549 14:39:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.549 14:39:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.549 14:39:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.549 14:39:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.549 14:39:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.549 14:39:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.549 14:39:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.549 14:39:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:31.549 14:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.549 14:39:40 -- common/autotest_common.sh@10 -- # set +x 00:20:32.116 nvme0n1 00:20:32.116 14:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.116 14:39:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.116 14:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.116 14:39:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:32.116 14:39:40 -- common/autotest_common.sh@10 -- # set +x 00:20:32.116 14:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.116 14:39:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.116 14:39:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.116 14:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.116 14:39:40 -- common/autotest_common.sh@10 -- # set +x 00:20:32.116 14:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.116 14:39:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:32.116 14:39:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:32.116 14:39:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:32.116 14:39:40 -- host/auth.sh@44 -- # digest=sha256 00:20:32.116 14:39:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:32.116 14:39:40 -- host/auth.sh@44 -- # keyid=2 00:20:32.116 14:39:40 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:32.116 14:39:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:32.116 14:39:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:32.116 14:39:40 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:32.116 14:39:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:20:32.116 14:39:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:32.116 14:39:40 -- host/auth.sh@68 -- # digest=sha256 00:20:32.116 14:39:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:32.116 14:39:40 -- host/auth.sh@68 -- # keyid=2 00:20:32.116 14:39:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.116 14:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.116 14:39:40 -- common/autotest_common.sh@10 -- # set +x 00:20:32.116 14:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.116 14:39:40 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:20:32.116 14:39:40 -- nvmf/common.sh@717 -- # local ip 00:20:32.116 14:39:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:32.116 14:39:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:32.116 14:39:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.116 14:39:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.116 14:39:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:32.116 14:39:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.116 14:39:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:32.116 14:39:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:32.116 14:39:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:32.116 14:39:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:32.116 14:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.116 14:39:40 -- common/autotest_common.sh@10 -- # set +x 00:20:33.053 nvme0n1 00:20:33.053 14:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.053 14:39:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.053 14:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.053 14:39:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.053 14:39:41 -- common/autotest_common.sh@10 -- # set +x 00:20:33.053 14:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.053 14:39:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.053 14:39:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.053 14:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.053 14:39:41 -- common/autotest_common.sh@10 -- # set +x 00:20:33.053 14:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.053 14:39:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.053 14:39:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:33.053 14:39:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.053 14:39:41 -- host/auth.sh@44 -- # digest=sha256 00:20:33.053 14:39:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.053 14:39:41 -- host/auth.sh@44 -- # keyid=3 00:20:33.053 14:39:41 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:33.053 14:39:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:33.053 14:39:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:33.053 14:39:41 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:33.053 14:39:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:20:33.053 14:39:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:33.053 14:39:41 -- host/auth.sh@68 -- # digest=sha256 00:20:33.053 14:39:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:33.053 14:39:41 -- host/auth.sh@68 -- # keyid=3 00:20:33.053 14:39:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.053 14:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.053 14:39:41 -- common/autotest_common.sh@10 -- # set +x 00:20:33.053 14:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.053 14:39:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.053 14:39:41 -- nvmf/common.sh@717 -- # local ip 00:20:33.053 14:39:41 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:20:33.053 14:39:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:33.053 14:39:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.053 14:39:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.053 14:39:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:33.053 14:39:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.053 14:39:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:33.053 14:39:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:33.053 14:39:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:33.053 14:39:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:33.053 14:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.053 14:39:41 -- common/autotest_common.sh@10 -- # set +x 00:20:33.623 nvme0n1 00:20:33.623 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.623 14:39:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.623 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.623 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:33.623 14:39:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.623 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.623 14:39:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.623 14:39:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.623 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.623 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:33.623 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.623 14:39:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.623 14:39:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:33.623 14:39:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.623 14:39:42 -- host/auth.sh@44 -- # digest=sha256 00:20:33.623 14:39:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.623 14:39:42 -- host/auth.sh@44 -- # keyid=4 00:20:33.623 14:39:42 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:33.623 14:39:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:33.623 14:39:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:33.623 14:39:42 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:33.623 14:39:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:20:33.623 14:39:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:33.623 14:39:42 -- host/auth.sh@68 -- # digest=sha256 00:20:33.623 14:39:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:33.623 14:39:42 -- host/auth.sh@68 -- # keyid=4 00:20:33.623 14:39:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.623 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.623 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:33.623 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.623 14:39:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.623 14:39:42 -- nvmf/common.sh@717 -- # local ip 00:20:33.623 14:39:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:33.623 14:39:42 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:33.623 14:39:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.623 14:39:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.623 14:39:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:33.623 14:39:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.623 14:39:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:33.623 14:39:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:33.623 14:39:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:33.623 14:39:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:33.623 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.623 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.196 nvme0n1 00:20:34.196 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.196 14:39:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.196 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.196 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.196 14:39:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.196 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.196 14:39:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.196 14:39:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.196 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.196 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.196 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.196 14:39:42 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:34.196 14:39:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.196 14:39:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:34.196 14:39:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:34.196 14:39:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:34.196 14:39:42 -- host/auth.sh@44 -- # digest=sha384 00:20:34.196 14:39:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.196 14:39:42 -- host/auth.sh@44 -- # keyid=0 00:20:34.196 14:39:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:34.196 14:39:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:34.196 14:39:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:34.196 14:39:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:34.196 14:39:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:20:34.196 14:39:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:34.196 14:39:42 -- host/auth.sh@68 -- # digest=sha384 00:20:34.196 14:39:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:34.196 14:39:42 -- host/auth.sh@68 -- # keyid=0 00:20:34.196 14:39:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.196 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.196 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.196 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.196 14:39:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:34.196 14:39:42 -- nvmf/common.sh@717 -- # local ip 00:20:34.196 14:39:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:34.196 14:39:42 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:34.196 14:39:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.196 14:39:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.196 14:39:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:34.196 14:39:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.196 14:39:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:34.196 14:39:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:34.196 14:39:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:34.196 14:39:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:34.196 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.196 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.455 nvme0n1 00:20:34.455 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.455 14:39:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.455 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.455 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.455 14:39:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.455 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.455 14:39:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.455 14:39:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.455 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.455 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.455 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.455 14:39:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:34.455 14:39:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:34.455 14:39:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:34.455 14:39:42 -- host/auth.sh@44 -- # digest=sha384 00:20:34.455 14:39:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.455 14:39:42 -- host/auth.sh@44 -- # keyid=1 00:20:34.455 14:39:42 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:34.455 14:39:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:34.455 14:39:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:34.455 14:39:42 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:34.455 14:39:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:20:34.455 14:39:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:34.455 14:39:42 -- host/auth.sh@68 -- # digest=sha384 00:20:34.455 14:39:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:34.455 14:39:42 -- host/auth.sh@68 -- # keyid=1 00:20:34.455 14:39:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.455 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.455 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.455 14:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.455 14:39:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:34.455 14:39:42 -- nvmf/common.sh@717 -- # local ip 00:20:34.455 14:39:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:34.455 14:39:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:34.455 14:39:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.455 
14:39:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.455 14:39:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:34.455 14:39:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.455 14:39:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:34.455 14:39:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:34.455 14:39:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:34.455 14:39:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:34.455 14:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.455 14:39:42 -- common/autotest_common.sh@10 -- # set +x 00:20:34.455 nvme0n1 00:20:34.455 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.455 14:39:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.455 14:39:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.455 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.455 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.715 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.715 14:39:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.715 14:39:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.715 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.715 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.715 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.715 14:39:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:34.715 14:39:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:34.715 14:39:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:34.715 14:39:43 -- host/auth.sh@44 -- # digest=sha384 00:20:34.715 14:39:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.715 14:39:43 -- host/auth.sh@44 -- # keyid=2 00:20:34.715 14:39:43 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:34.715 14:39:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:34.715 14:39:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:34.715 14:39:43 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:34.715 14:39:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:20:34.715 14:39:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:34.715 14:39:43 -- host/auth.sh@68 -- # digest=sha384 00:20:34.715 14:39:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:34.715 14:39:43 -- host/auth.sh@68 -- # keyid=2 00:20:34.715 14:39:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.715 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.715 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.715 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.715 14:39:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:34.715 14:39:43 -- nvmf/common.sh@717 -- # local ip 00:20:34.715 14:39:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:34.715 14:39:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:34.715 14:39:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.715 14:39:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.715 14:39:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:34.715 14:39:43 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.715 14:39:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:34.715 14:39:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:34.715 14:39:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:34.715 14:39:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:34.715 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.715 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.715 nvme0n1 00:20:34.715 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.715 14:39:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.715 14:39:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.715 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.715 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.715 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.715 14:39:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.715 14:39:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.715 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.715 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.715 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.715 14:39:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:34.715 14:39:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:34.715 14:39:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:34.715 14:39:43 -- host/auth.sh@44 -- # digest=sha384 00:20:34.715 14:39:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.715 14:39:43 -- host/auth.sh@44 -- # keyid=3 00:20:34.715 14:39:43 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:34.715 14:39:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:34.715 14:39:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:34.715 14:39:43 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:34.715 14:39:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:20:34.715 14:39:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:34.715 14:39:43 -- host/auth.sh@68 -- # digest=sha384 00:20:34.715 14:39:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:34.715 14:39:43 -- host/auth.sh@68 -- # keyid=3 00:20:34.715 14:39:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.715 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.715 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.715 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.715 14:39:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:34.715 14:39:43 -- nvmf/common.sh@717 -- # local ip 00:20:34.715 14:39:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:34.715 14:39:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:34.715 14:39:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.715 14:39:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.715 14:39:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:34.715 14:39:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.715 14:39:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
00:20:34.715 14:39:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:34.715 14:39:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:34.715 14:39:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:34.715 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.715 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.974 nvme0n1 00:20:34.974 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.974 14:39:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.974 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.974 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.974 14:39:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.974 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.974 14:39:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.974 14:39:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.974 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.974 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.974 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.974 14:39:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:34.974 14:39:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:34.974 14:39:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:34.974 14:39:43 -- host/auth.sh@44 -- # digest=sha384 00:20:34.974 14:39:43 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.974 14:39:43 -- host/auth.sh@44 -- # keyid=4 00:20:34.974 14:39:43 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:34.975 14:39:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:34.975 14:39:43 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:34.975 14:39:43 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:34.975 14:39:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:20:34.975 14:39:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:34.975 14:39:43 -- host/auth.sh@68 -- # digest=sha384 00:20:34.975 14:39:43 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:34.975 14:39:43 -- host/auth.sh@68 -- # keyid=4 00:20:34.975 14:39:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.975 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.975 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.975 14:39:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:34.975 14:39:43 -- nvmf/common.sh@717 -- # local ip 00:20:34.975 14:39:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:34.975 14:39:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:34.975 14:39:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.975 14:39:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.975 14:39:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:34.975 14:39:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.975 14:39:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:34.975 14:39:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:34.975 
14:39:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:34.975 14:39:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:34.975 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.975 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.234 nvme0n1 00:20:35.234 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.234 14:39:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.234 14:39:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.234 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.234 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.234 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.234 14:39:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.234 14:39:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.234 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.234 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.234 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.234 14:39:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.234 14:39:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.234 14:39:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:35.234 14:39:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.234 14:39:43 -- host/auth.sh@44 -- # digest=sha384 00:20:35.234 14:39:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.234 14:39:43 -- host/auth.sh@44 -- # keyid=0 00:20:35.234 14:39:43 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:35.234 14:39:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:35.234 14:39:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:35.234 14:39:43 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:35.234 14:39:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:20:35.234 14:39:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.234 14:39:43 -- host/auth.sh@68 -- # digest=sha384 00:20:35.234 14:39:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:35.234 14:39:43 -- host/auth.sh@68 -- # keyid=0 00:20:35.234 14:39:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.234 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.234 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.234 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.234 14:39:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.234 14:39:43 -- nvmf/common.sh@717 -- # local ip 00:20:35.234 14:39:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.234 14:39:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.234 14:39:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.234 14:39:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.234 14:39:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:35.234 14:39:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.234 14:39:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:35.234 14:39:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:35.234 14:39:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:35.234 14:39:43 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:35.234 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.234 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.234 nvme0n1 00:20:35.234 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.234 14:39:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.234 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.234 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.234 14:39:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.234 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.494 14:39:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.494 14:39:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.494 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.494 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.494 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.494 14:39:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.494 14:39:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:35.494 14:39:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.494 14:39:43 -- host/auth.sh@44 -- # digest=sha384 00:20:35.494 14:39:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.494 14:39:43 -- host/auth.sh@44 -- # keyid=1 00:20:35.494 14:39:43 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:35.494 14:39:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:35.494 14:39:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:35.494 14:39:43 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:35.494 14:39:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:20:35.494 14:39:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.494 14:39:43 -- host/auth.sh@68 -- # digest=sha384 00:20:35.494 14:39:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:35.494 14:39:43 -- host/auth.sh@68 -- # keyid=1 00:20:35.494 14:39:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.494 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.494 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.494 14:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.494 14:39:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.494 14:39:43 -- nvmf/common.sh@717 -- # local ip 00:20:35.494 14:39:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.494 14:39:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.494 14:39:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.494 14:39:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.494 14:39:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:35.494 14:39:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.494 14:39:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:35.494 14:39:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:35.494 14:39:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:35.494 14:39:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:35.494 14:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.494 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.494 nvme0n1 00:20:35.494 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.494 14:39:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.494 14:39:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.494 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.494 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.494 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.494 14:39:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.494 14:39:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.494 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.494 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.494 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.494 14:39:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.494 14:39:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:35.494 14:39:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.494 14:39:44 -- host/auth.sh@44 -- # digest=sha384 00:20:35.494 14:39:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.494 14:39:44 -- host/auth.sh@44 -- # keyid=2 00:20:35.494 14:39:44 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:35.494 14:39:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:35.494 14:39:44 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:35.494 14:39:44 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:35.494 14:39:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:20:35.494 14:39:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.494 14:39:44 -- host/auth.sh@68 -- # digest=sha384 00:20:35.494 14:39:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:35.494 14:39:44 -- host/auth.sh@68 -- # keyid=2 00:20:35.494 14:39:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.494 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.494 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.494 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.494 14:39:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.494 14:39:44 -- nvmf/common.sh@717 -- # local ip 00:20:35.494 14:39:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.494 14:39:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.494 14:39:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.494 14:39:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.494 14:39:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:35.494 14:39:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.494 14:39:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:35.494 14:39:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:35.494 14:39:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:35.494 14:39:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:35.494 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.494 
14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.754 nvme0n1 00:20:35.754 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.754 14:39:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.754 14:39:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.754 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.754 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.754 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.754 14:39:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.754 14:39:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.754 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.754 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.754 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.754 14:39:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.754 14:39:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:35.754 14:39:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.754 14:39:44 -- host/auth.sh@44 -- # digest=sha384 00:20:35.754 14:39:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.754 14:39:44 -- host/auth.sh@44 -- # keyid=3 00:20:35.754 14:39:44 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:35.754 14:39:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:35.754 14:39:44 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:35.754 14:39:44 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:35.754 14:39:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:20:35.754 14:39:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.754 14:39:44 -- host/auth.sh@68 -- # digest=sha384 00:20:35.754 14:39:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:35.754 14:39:44 -- host/auth.sh@68 -- # keyid=3 00:20:35.754 14:39:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.754 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.754 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.754 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.754 14:39:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.754 14:39:44 -- nvmf/common.sh@717 -- # local ip 00:20:35.754 14:39:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.754 14:39:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.754 14:39:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.754 14:39:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.754 14:39:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:35.754 14:39:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.754 14:39:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:35.754 14:39:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:35.754 14:39:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:35.754 14:39:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:35.754 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.754 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.014 nvme0n1 00:20:36.014 14:39:44 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.014 14:39:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.014 14:39:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.014 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.014 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.014 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.014 14:39:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.014 14:39:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.014 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.014 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.014 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.014 14:39:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.014 14:39:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:36.014 14:39:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.014 14:39:44 -- host/auth.sh@44 -- # digest=sha384 00:20:36.014 14:39:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.014 14:39:44 -- host/auth.sh@44 -- # keyid=4 00:20:36.014 14:39:44 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:36.014 14:39:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:36.014 14:39:44 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:36.014 14:39:44 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:36.014 14:39:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:20:36.014 14:39:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.014 14:39:44 -- host/auth.sh@68 -- # digest=sha384 00:20:36.014 14:39:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:36.014 14:39:44 -- host/auth.sh@68 -- # keyid=4 00:20:36.014 14:39:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.014 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.014 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.014 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.014 14:39:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.014 14:39:44 -- nvmf/common.sh@717 -- # local ip 00:20:36.014 14:39:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.014 14:39:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.014 14:39:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.014 14:39:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.014 14:39:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.014 14:39:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.014 14:39:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.014 14:39:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.014 14:39:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.014 14:39:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:36.014 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.014 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.014 nvme0n1 00:20:36.014 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.014 14:39:44 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.014 14:39:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.014 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.014 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.273 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.273 14:39:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.273 14:39:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.273 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.273 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.273 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.273 14:39:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.273 14:39:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.273 14:39:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:36.273 14:39:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.273 14:39:44 -- host/auth.sh@44 -- # digest=sha384 00:20:36.273 14:39:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.273 14:39:44 -- host/auth.sh@44 -- # keyid=0 00:20:36.273 14:39:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:36.273 14:39:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:36.273 14:39:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:36.273 14:39:44 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:36.273 14:39:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:20:36.273 14:39:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.273 14:39:44 -- host/auth.sh@68 -- # digest=sha384 00:20:36.273 14:39:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:36.273 14:39:44 -- host/auth.sh@68 -- # keyid=0 00:20:36.273 14:39:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.273 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.273 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.273 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.273 14:39:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.273 14:39:44 -- nvmf/common.sh@717 -- # local ip 00:20:36.273 14:39:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.273 14:39:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.273 14:39:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.273 14:39:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.273 14:39:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.273 14:39:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.273 14:39:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.273 14:39:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.273 14:39:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.273 14:39:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:36.273 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.273 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.273 nvme0n1 00:20:36.273 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.273 14:39:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.273 14:39:44 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.273 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.273 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.532 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.532 14:39:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.532 14:39:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.532 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.532 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.533 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.533 14:39:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.533 14:39:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:36.533 14:39:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.533 14:39:44 -- host/auth.sh@44 -- # digest=sha384 00:20:36.533 14:39:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.533 14:39:44 -- host/auth.sh@44 -- # keyid=1 00:20:36.533 14:39:44 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:36.533 14:39:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:36.533 14:39:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:36.533 14:39:44 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:36.533 14:39:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:20:36.533 14:39:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.533 14:39:44 -- host/auth.sh@68 -- # digest=sha384 00:20:36.533 14:39:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:36.533 14:39:44 -- host/auth.sh@68 -- # keyid=1 00:20:36.533 14:39:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.533 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.533 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.533 14:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.533 14:39:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.533 14:39:44 -- nvmf/common.sh@717 -- # local ip 00:20:36.533 14:39:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.533 14:39:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.533 14:39:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.533 14:39:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.533 14:39:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.533 14:39:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.533 14:39:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.533 14:39:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.533 14:39:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.533 14:39:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:36.533 14:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.533 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:36.792 nvme0n1 00:20:36.792 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.792 14:39:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.792 14:39:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.792 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 
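The trace above (and the near-identical blocks that follow) is host/auth.sh sweeping every digest/DH-group/key-ID combination: for each one it installs the secret on the target side with nvmet_auth_set_key, restricts the initiator with bdev_nvme_set_options, attaches a controller with the matching --dhchap-key, confirms it shows up in bdev_nvme_get_controllers, and detaches it again. A minimal sketch of one such iteration, assuming a target already listening on 10.0.0.1:4420 and the stock scripts/rpc.py client (the test itself drives the same RPCs through its rpc_cmd wrapper, and "key1" names a DH-CHAP key set up earlier in the run, outside this excerpt):

    # Limit the initiator to the digest/DH group under test (sha384 + ffdhe4096 here).
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Attach using the key that matches what nvmet_auth_set_key installed on the target.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1

    # Authentication succeeded if the controller is visible; detach before the next combination.
    [[ "$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0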
00:20:36.792 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.792 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.792 14:39:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.792 14:39:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.792 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.792 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.792 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.792 14:39:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.792 14:39:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:36.792 14:39:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.792 14:39:45 -- host/auth.sh@44 -- # digest=sha384 00:20:36.792 14:39:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.792 14:39:45 -- host/auth.sh@44 -- # keyid=2 00:20:36.792 14:39:45 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:36.792 14:39:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:36.792 14:39:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:36.792 14:39:45 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:36.792 14:39:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:20:36.792 14:39:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.792 14:39:45 -- host/auth.sh@68 -- # digest=sha384 00:20:36.792 14:39:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:36.792 14:39:45 -- host/auth.sh@68 -- # keyid=2 00:20:36.792 14:39:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.792 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.792 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:36.792 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.792 14:39:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.792 14:39:45 -- nvmf/common.sh@717 -- # local ip 00:20:36.792 14:39:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.792 14:39:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.792 14:39:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.792 14:39:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.792 14:39:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.792 14:39:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.792 14:39:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.792 14:39:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.792 14:39:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.792 14:39:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:36.792 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.792 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.051 nvme0n1 00:20:37.051 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.051 14:39:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.051 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.051 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.051 14:39:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.051 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.051 14:39:45 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.051 14:39:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.051 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.051 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.051 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.051 14:39:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.051 14:39:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:37.051 14:39:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.051 14:39:45 -- host/auth.sh@44 -- # digest=sha384 00:20:37.051 14:39:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.051 14:39:45 -- host/auth.sh@44 -- # keyid=3 00:20:37.051 14:39:45 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:37.051 14:39:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:37.051 14:39:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:37.051 14:39:45 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:37.051 14:39:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:20:37.051 14:39:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.051 14:39:45 -- host/auth.sh@68 -- # digest=sha384 00:20:37.051 14:39:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:37.051 14:39:45 -- host/auth.sh@68 -- # keyid=3 00:20:37.051 14:39:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.051 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.051 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.051 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.051 14:39:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.051 14:39:45 -- nvmf/common.sh@717 -- # local ip 00:20:37.051 14:39:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.051 14:39:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.051 14:39:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.051 14:39:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.051 14:39:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:37.051 14:39:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.052 14:39:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:37.052 14:39:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:37.052 14:39:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:37.052 14:39:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:37.052 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.052 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.311 nvme0n1 00:20:37.311 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.311 14:39:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.311 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.311 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.311 14:39:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.311 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.311 14:39:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.311 14:39:45 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:37.311 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.311 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.311 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.311 14:39:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.311 14:39:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:37.311 14:39:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.311 14:39:45 -- host/auth.sh@44 -- # digest=sha384 00:20:37.311 14:39:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.311 14:39:45 -- host/auth.sh@44 -- # keyid=4 00:20:37.311 14:39:45 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:37.311 14:39:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:37.311 14:39:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:37.311 14:39:45 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:37.311 14:39:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:20:37.311 14:39:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.311 14:39:45 -- host/auth.sh@68 -- # digest=sha384 00:20:37.311 14:39:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:37.311 14:39:45 -- host/auth.sh@68 -- # keyid=4 00:20:37.311 14:39:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.311 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.311 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.311 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.311 14:39:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.311 14:39:45 -- nvmf/common.sh@717 -- # local ip 00:20:37.311 14:39:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.311 14:39:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.311 14:39:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.311 14:39:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.311 14:39:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:37.311 14:39:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.311 14:39:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:37.311 14:39:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:37.311 14:39:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:37.311 14:39:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:37.311 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.311 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.570 nvme0n1 00:20:37.570 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.570 14:39:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.570 14:39:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.570 14:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.570 14:39:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.570 14:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.570 14:39:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.570 14:39:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.570 14:39:46 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.570 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.570 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.570 14:39:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.570 14:39:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.570 14:39:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:37.570 14:39:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.570 14:39:46 -- host/auth.sh@44 -- # digest=sha384 00:20:37.570 14:39:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:37.570 14:39:46 -- host/auth.sh@44 -- # keyid=0 00:20:37.570 14:39:46 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:37.570 14:39:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:37.570 14:39:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:37.570 14:39:46 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:37.570 14:39:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:20:37.570 14:39:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.570 14:39:46 -- host/auth.sh@68 -- # digest=sha384 00:20:37.570 14:39:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:37.570 14:39:46 -- host/auth.sh@68 -- # keyid=0 00:20:37.570 14:39:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.570 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.570 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.570 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.570 14:39:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.570 14:39:46 -- nvmf/common.sh@717 -- # local ip 00:20:37.570 14:39:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.570 14:39:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.570 14:39:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.570 14:39:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.570 14:39:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:37.570 14:39:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.570 14:39:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:37.570 14:39:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:37.570 14:39:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:37.570 14:39:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:37.570 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.570 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.828 nvme0n1 00:20:37.828 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.828 14:39:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.828 14:39:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.828 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.828 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.828 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.087 14:39:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.087 14:39:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.087 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.087 14:39:46 -- 
common/autotest_common.sh@10 -- # set +x 00:20:38.087 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.087 14:39:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:38.087 14:39:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:38.087 14:39:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:38.087 14:39:46 -- host/auth.sh@44 -- # digest=sha384 00:20:38.087 14:39:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.087 14:39:46 -- host/auth.sh@44 -- # keyid=1 00:20:38.087 14:39:46 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:38.087 14:39:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:38.087 14:39:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:38.087 14:39:46 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:38.087 14:39:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:20:38.087 14:39:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:38.087 14:39:46 -- host/auth.sh@68 -- # digest=sha384 00:20:38.087 14:39:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:38.087 14:39:46 -- host/auth.sh@68 -- # keyid=1 00:20:38.087 14:39:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.087 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.087 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:38.087 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.087 14:39:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:38.087 14:39:46 -- nvmf/common.sh@717 -- # local ip 00:20:38.087 14:39:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:38.087 14:39:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:38.087 14:39:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.087 14:39:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.087 14:39:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:38.087 14:39:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.087 14:39:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:38.087 14:39:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:38.087 14:39:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:38.087 14:39:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:38.087 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.087 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:38.346 nvme0n1 00:20:38.346 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.346 14:39:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.346 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.346 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:38.346 14:39:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:38.346 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.346 14:39:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.346 14:39:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.346 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.346 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:38.346 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
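The repeated nvmf/common.sh lines around each attach are get_main_ns_ip resolving which address to dial: the helper maps the transport to an environment variable name (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), dereferences it, and echoes the result, which is 10.0.0.1 for the tcp/virt setup in this run. A condensed reconstruction of that logic, offered as a sketch since the real function in the test harness may differ in detail:

    # Sketch of the get_main_ns_ip behavior traced above; TEST_TRANSPORT and
    # NVMF_INITIATOR_IP are assumed to come from the surrounding test environment.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1            # indirect expansion: the actual address
        echo "${!ip}"                          # -> 10.0.0.1 in this run
    }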
00:20:38.346 14:39:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:38.346 14:39:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:38.346 14:39:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:38.346 14:39:46 -- host/auth.sh@44 -- # digest=sha384 00:20:38.346 14:39:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.346 14:39:46 -- host/auth.sh@44 -- # keyid=2 00:20:38.346 14:39:46 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:38.346 14:39:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:38.346 14:39:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:38.346 14:39:46 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:38.346 14:39:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:20:38.346 14:39:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:38.346 14:39:46 -- host/auth.sh@68 -- # digest=sha384 00:20:38.346 14:39:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:38.346 14:39:46 -- host/auth.sh@68 -- # keyid=2 00:20:38.346 14:39:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.346 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.346 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:38.346 14:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.346 14:39:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:38.346 14:39:46 -- nvmf/common.sh@717 -- # local ip 00:20:38.346 14:39:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:38.346 14:39:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:38.346 14:39:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.346 14:39:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.346 14:39:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:38.346 14:39:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.346 14:39:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:38.346 14:39:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:38.346 14:39:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:38.346 14:39:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:38.346 14:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.346 14:39:46 -- common/autotest_common.sh@10 -- # set +x 00:20:38.914 nvme0n1 00:20:38.914 14:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.914 14:39:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:38.914 14:39:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.914 14:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.914 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.914 14:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.914 14:39:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.914 14:39:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.914 14:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.914 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.914 14:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.914 14:39:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:38.914 14:39:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
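Each key= value cycled through above is a DH-CHAP secret in the DHHC-1 text representation: the literal prefix DHHC-1, a two-digit field describing how the secret was generated, a base64 blob carrying the secret bytes plus a short integrity trailer, and a closing ':'. A quick way to pull one apart with plain coreutils (a sketch; the field semantics and trailer layout are stated as an assumption about the NVMe DH-HMAC-CHAP secret format, not something shown in this log):

    # Split the keyid=4 secret from this run into its fields and measure the payload.
    key='DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=:'
    IFS=: read -r magic subtype b64 _ <<< "$key"
    echo "format=$magic subtype=$subtype"
    printf '%s' "$b64" | base64 -d | wc -c    # secret bytes plus the trailer (68 here)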
00:20:38.914 14:39:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:38.914 14:39:47 -- host/auth.sh@44 -- # digest=sha384 00:20:38.914 14:39:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.914 14:39:47 -- host/auth.sh@44 -- # keyid=3 00:20:38.914 14:39:47 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:38.914 14:39:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:38.914 14:39:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:38.914 14:39:47 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:38.914 14:39:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:20:38.914 14:39:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:38.914 14:39:47 -- host/auth.sh@68 -- # digest=sha384 00:20:38.914 14:39:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:38.914 14:39:47 -- host/auth.sh@68 -- # keyid=3 00:20:38.914 14:39:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.914 14:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.914 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.914 14:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.914 14:39:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:38.914 14:39:47 -- nvmf/common.sh@717 -- # local ip 00:20:38.914 14:39:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:38.914 14:39:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:38.914 14:39:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.914 14:39:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.914 14:39:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:38.914 14:39:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.914 14:39:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:38.914 14:39:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:38.914 14:39:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:38.914 14:39:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:38.914 14:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.914 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:39.173 nvme0n1 00:20:39.173 14:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.173 14:39:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.173 14:39:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:39.173 14:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.173 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:39.173 14:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.173 14:39:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.173 14:39:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.173 14:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.173 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:39.432 14:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.432 14:39:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:39.432 14:39:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:39.432 14:39:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:39.432 14:39:47 -- host/auth.sh@44 -- 
# digest=sha384 00:20:39.432 14:39:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.432 14:39:47 -- host/auth.sh@44 -- # keyid=4 00:20:39.432 14:39:47 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:39.432 14:39:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:39.432 14:39:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:39.432 14:39:47 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:39.432 14:39:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:20:39.432 14:39:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:39.432 14:39:47 -- host/auth.sh@68 -- # digest=sha384 00:20:39.432 14:39:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:39.432 14:39:47 -- host/auth.sh@68 -- # keyid=4 00:20:39.432 14:39:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.432 14:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.432 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:39.432 14:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.432 14:39:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:39.432 14:39:47 -- nvmf/common.sh@717 -- # local ip 00:20:39.432 14:39:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:39.432 14:39:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:39.432 14:39:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.432 14:39:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.432 14:39:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:39.432 14:39:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.432 14:39:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:39.432 14:39:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:39.432 14:39:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:39.432 14:39:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:39.432 14:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.432 14:39:47 -- common/autotest_common.sh@10 -- # set +x 00:20:39.690 nvme0n1 00:20:39.690 14:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.690 14:39:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.690 14:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.690 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.690 14:39:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:39.690 14:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.690 14:39:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.690 14:39:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.690 14:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.690 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.691 14:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.691 14:39:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.691 14:39:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:39.691 14:39:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:39.691 14:39:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:39.691 14:39:48 -- host/auth.sh@44 -- # 
digest=sha384 00:20:39.691 14:39:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:39.691 14:39:48 -- host/auth.sh@44 -- # keyid=0 00:20:39.691 14:39:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:39.691 14:39:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:39.691 14:39:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:39.691 14:39:48 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:39.691 14:39:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:20:39.691 14:39:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:39.691 14:39:48 -- host/auth.sh@68 -- # digest=sha384 00:20:39.691 14:39:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:39.691 14:39:48 -- host/auth.sh@68 -- # keyid=0 00:20:39.691 14:39:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.691 14:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.691 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.691 14:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.691 14:39:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:39.691 14:39:48 -- nvmf/common.sh@717 -- # local ip 00:20:39.691 14:39:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:39.691 14:39:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:39.691 14:39:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.691 14:39:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.691 14:39:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:39.691 14:39:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.691 14:39:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:39.691 14:39:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:39.691 14:39:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:39.691 14:39:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:39.691 14:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.691 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:40.258 nvme0n1 00:20:40.258 14:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.258 14:39:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.258 14:39:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:40.258 14:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.258 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:40.517 14:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.517 14:39:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.517 14:39:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.517 14:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.517 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:40.517 14:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.517 14:39:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:40.517 14:39:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:40.517 14:39:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:40.517 14:39:48 -- host/auth.sh@44 -- # digest=sha384 00:20:40.517 14:39:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:40.517 14:39:48 -- host/auth.sh@44 -- # keyid=1 00:20:40.517 14:39:48 -- 
host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:40.517 14:39:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:40.517 14:39:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:40.517 14:39:48 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:40.517 14:39:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:20:40.517 14:39:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:40.517 14:39:48 -- host/auth.sh@68 -- # digest=sha384 00:20:40.517 14:39:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:40.517 14:39:48 -- host/auth.sh@68 -- # keyid=1 00:20:40.517 14:39:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.517 14:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.517 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:40.517 14:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.517 14:39:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:40.517 14:39:48 -- nvmf/common.sh@717 -- # local ip 00:20:40.517 14:39:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:40.517 14:39:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:40.518 14:39:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.518 14:39:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.518 14:39:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:40.518 14:39:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.518 14:39:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:40.518 14:39:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:40.518 14:39:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:40.518 14:39:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:40.518 14:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.518 14:39:48 -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 nvme0n1 00:20:41.085 14:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.085 14:39:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.085 14:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.085 14:39:49 -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 14:39:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:41.085 14:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.085 14:39:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.085 14:39:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.085 14:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.085 14:39:49 -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 14:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.085 14:39:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:41.085 14:39:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:41.085 14:39:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:41.085 14:39:49 -- host/auth.sh@44 -- # digest=sha384 00:20:41.085 14:39:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.085 14:39:49 -- host/auth.sh@44 -- # keyid=2 00:20:41.085 14:39:49 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:41.085 14:39:49 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:41.085 14:39:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:41.085 14:39:49 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:41.085 14:39:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:20:41.085 14:39:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:41.085 14:39:49 -- host/auth.sh@68 -- # digest=sha384 00:20:41.085 14:39:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:41.085 14:39:49 -- host/auth.sh@68 -- # keyid=2 00:20:41.085 14:39:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.085 14:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.085 14:39:49 -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 14:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.085 14:39:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:41.085 14:39:49 -- nvmf/common.sh@717 -- # local ip 00:20:41.085 14:39:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:41.085 14:39:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:41.085 14:39:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.085 14:39:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.085 14:39:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:41.085 14:39:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.085 14:39:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:41.085 14:39:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:41.085 14:39:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:41.085 14:39:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:41.085 14:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.085 14:39:49 -- common/autotest_common.sh@10 -- # set +x 00:20:42.019 nvme0n1 00:20:42.019 14:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.019 14:39:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.019 14:39:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:42.019 14:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.019 14:39:50 -- common/autotest_common.sh@10 -- # set +x 00:20:42.019 14:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.019 14:39:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.019 14:39:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.019 14:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.019 14:39:50 -- common/autotest_common.sh@10 -- # set +x 00:20:42.019 14:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.019 14:39:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:42.019 14:39:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:42.019 14:39:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:42.020 14:39:50 -- host/auth.sh@44 -- # digest=sha384 00:20:42.020 14:39:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.020 14:39:50 -- host/auth.sh@44 -- # keyid=3 00:20:42.020 14:39:50 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:42.020 14:39:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:42.020 14:39:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:42.020 14:39:50 -- host/auth.sh@49 
-- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:42.020 14:39:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:20:42.020 14:39:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:42.020 14:39:50 -- host/auth.sh@68 -- # digest=sha384 00:20:42.020 14:39:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:42.020 14:39:50 -- host/auth.sh@68 -- # keyid=3 00:20:42.020 14:39:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.020 14:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.020 14:39:50 -- common/autotest_common.sh@10 -- # set +x 00:20:42.020 14:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.020 14:39:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:42.020 14:39:50 -- nvmf/common.sh@717 -- # local ip 00:20:42.020 14:39:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:42.020 14:39:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:42.020 14:39:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.020 14:39:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.020 14:39:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:42.020 14:39:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.020 14:39:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:42.020 14:39:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:42.020 14:39:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:42.020 14:39:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:42.020 14:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.020 14:39:50 -- common/autotest_common.sh@10 -- # set +x 00:20:42.588 nvme0n1 00:20:42.588 14:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.588 14:39:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.588 14:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.588 14:39:50 -- common/autotest_common.sh@10 -- # set +x 00:20:42.588 14:39:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:42.588 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.588 14:39:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.588 14:39:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.588 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.588 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:42.588 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.588 14:39:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:42.588 14:39:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:42.588 14:39:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:42.588 14:39:51 -- host/auth.sh@44 -- # digest=sha384 00:20:42.588 14:39:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.588 14:39:51 -- host/auth.sh@44 -- # keyid=4 00:20:42.588 14:39:51 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:42.588 14:39:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:42.588 14:39:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:42.588 14:39:51 -- host/auth.sh@49 -- # echo 
DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:42.588 14:39:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:20:42.588 14:39:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:42.588 14:39:51 -- host/auth.sh@68 -- # digest=sha384 00:20:42.588 14:39:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:42.588 14:39:51 -- host/auth.sh@68 -- # keyid=4 00:20:42.588 14:39:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.588 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.588 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:42.588 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.588 14:39:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:42.588 14:39:51 -- nvmf/common.sh@717 -- # local ip 00:20:42.588 14:39:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:42.588 14:39:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:42.588 14:39:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.588 14:39:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.588 14:39:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:42.588 14:39:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.588 14:39:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:42.588 14:39:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:42.588 14:39:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:42.588 14:39:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:42.588 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.588 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.158 nvme0n1 00:20:43.158 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.158 14:39:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.158 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.158 14:39:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:43.158 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.158 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.417 14:39:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.417 14:39:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.417 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.417 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.417 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.417 14:39:51 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:43.417 14:39:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.417 14:39:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:43.417 14:39:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:43.417 14:39:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:43.417 14:39:51 -- host/auth.sh@44 -- # digest=sha512 00:20:43.417 14:39:51 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.417 14:39:51 -- host/auth.sh@44 -- # keyid=0 00:20:43.417 14:39:51 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:43.417 14:39:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:43.417 14:39:51 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:43.417 
14:39:51 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:43.417 14:39:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:20:43.417 14:39:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:43.417 14:39:51 -- host/auth.sh@68 -- # digest=sha512 00:20:43.417 14:39:51 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:43.417 14:39:51 -- host/auth.sh@68 -- # keyid=0 00:20:43.417 14:39:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.417 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.417 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.417 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.417 14:39:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:43.417 14:39:51 -- nvmf/common.sh@717 -- # local ip 00:20:43.417 14:39:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.417 14:39:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:43.417 14:39:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.417 14:39:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.417 14:39:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:43.417 14:39:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.417 14:39:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:43.417 14:39:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:43.417 14:39:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:43.417 14:39:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:43.417 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.417 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.417 nvme0n1 00:20:43.417 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.417 14:39:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.417 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.417 14:39:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:43.417 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.418 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.418 14:39:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.418 14:39:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.418 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.418 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.418 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.418 14:39:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:43.418 14:39:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:43.418 14:39:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:43.418 14:39:51 -- host/auth.sh@44 -- # digest=sha512 00:20:43.418 14:39:51 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.418 14:39:51 -- host/auth.sh@44 -- # keyid=1 00:20:43.418 14:39:51 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:43.418 14:39:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:43.418 14:39:51 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:43.418 14:39:51 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:43.418 14:39:51 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:20:43.418 14:39:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:43.418 14:39:51 -- host/auth.sh@68 -- # digest=sha512 00:20:43.418 14:39:51 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:43.418 14:39:51 -- host/auth.sh@68 -- # keyid=1 00:20:43.418 14:39:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.418 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.418 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.418 14:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.418 14:39:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:43.418 14:39:51 -- nvmf/common.sh@717 -- # local ip 00:20:43.418 14:39:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.418 14:39:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:43.418 14:39:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.418 14:39:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.418 14:39:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:43.418 14:39:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.418 14:39:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:43.418 14:39:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:43.418 14:39:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:43.418 14:39:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:43.418 14:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.418 14:39:51 -- common/autotest_common.sh@10 -- # set +x 00:20:43.677 nvme0n1 00:20:43.677 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.677 14:39:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.677 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.677 14:39:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:43.677 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.677 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.677 14:39:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.677 14:39:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.677 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.677 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.677 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.677 14:39:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:43.677 14:39:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:43.677 14:39:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:43.677 14:39:52 -- host/auth.sh@44 -- # digest=sha512 00:20:43.677 14:39:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.677 14:39:52 -- host/auth.sh@44 -- # keyid=2 00:20:43.677 14:39:52 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:43.677 14:39:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:43.677 14:39:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:43.677 14:39:52 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:43.677 14:39:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:20:43.677 14:39:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:43.677 14:39:52 -- 
host/auth.sh@68 -- # digest=sha512 00:20:43.677 14:39:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:43.677 14:39:52 -- host/auth.sh@68 -- # keyid=2 00:20:43.677 14:39:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.677 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.677 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.677 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.677 14:39:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:43.677 14:39:52 -- nvmf/common.sh@717 -- # local ip 00:20:43.677 14:39:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.677 14:39:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:43.677 14:39:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.677 14:39:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.677 14:39:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:43.677 14:39:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.677 14:39:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:43.677 14:39:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:43.677 14:39:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:43.677 14:39:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:43.677 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.677 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.677 nvme0n1 00:20:43.677 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.677 14:39:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:43.677 14:39:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.677 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.677 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.936 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.937 14:39:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.937 14:39:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.937 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.937 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.937 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.937 14:39:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:43.937 14:39:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:43.937 14:39:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:43.937 14:39:52 -- host/auth.sh@44 -- # digest=sha512 00:20:43.937 14:39:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.937 14:39:52 -- host/auth.sh@44 -- # keyid=3 00:20:43.937 14:39:52 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:43.937 14:39:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:43.937 14:39:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:43.937 14:39:52 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:43.937 14:39:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:20:43.937 14:39:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:43.937 14:39:52 -- host/auth.sh@68 -- # digest=sha512 00:20:43.937 14:39:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:43.937 14:39:52 
-- host/auth.sh@68 -- # keyid=3 00:20:43.937 14:39:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.937 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.937 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.937 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.937 14:39:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:43.937 14:39:52 -- nvmf/common.sh@717 -- # local ip 00:20:43.937 14:39:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.937 14:39:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:43.937 14:39:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.937 14:39:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.937 14:39:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:43.937 14:39:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.937 14:39:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:43.937 14:39:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:43.937 14:39:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:43.937 14:39:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:43.937 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.937 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.937 nvme0n1 00:20:43.937 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.937 14:39:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:43.937 14:39:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.937 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.937 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.937 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.937 14:39:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.937 14:39:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.937 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.937 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.937 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.937 14:39:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:43.937 14:39:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:43.937 14:39:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:43.937 14:39:52 -- host/auth.sh@44 -- # digest=sha512 00:20:43.937 14:39:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.937 14:39:52 -- host/auth.sh@44 -- # keyid=4 00:20:43.937 14:39:52 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:43.937 14:39:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:43.937 14:39:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:43.937 14:39:52 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:43.937 14:39:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:20:43.937 14:39:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:43.937 14:39:52 -- host/auth.sh@68 -- # digest=sha512 00:20:43.937 14:39:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:43.937 14:39:52 -- host/auth.sh@68 -- # keyid=4 00:20:43.937 14:39:52 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.937 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.937 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:43.937 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.937 14:39:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:43.937 14:39:52 -- nvmf/common.sh@717 -- # local ip 00:20:43.937 14:39:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.937 14:39:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:43.937 14:39:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.937 14:39:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.196 14:39:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.197 14:39:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.197 14:39:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.197 14:39:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.197 14:39:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.197 14:39:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:44.197 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.197 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.197 nvme0n1 00:20:44.197 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.197 14:39:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.197 14:39:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.197 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.197 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.197 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.197 14:39:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.197 14:39:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.197 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.197 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.197 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.197 14:39:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.197 14:39:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.197 14:39:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:44.197 14:39:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.197 14:39:52 -- host/auth.sh@44 -- # digest=sha512 00:20:44.197 14:39:52 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.197 14:39:52 -- host/auth.sh@44 -- # keyid=0 00:20:44.197 14:39:52 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:44.197 14:39:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:44.197 14:39:52 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:44.197 14:39:52 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:44.197 14:39:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:20:44.197 14:39:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.197 14:39:52 -- host/auth.sh@68 -- # digest=sha512 00:20:44.197 14:39:52 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:44.197 14:39:52 -- host/auth.sh@68 -- # keyid=0 00:20:44.197 14:39:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
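Every round in this trace follows the same shape: nvmet_auth_set_key programs the kernel target with the HMAC digest, the FFDHE group and the DHHC-1 secret for the current keyid, and connect_authenticate then restricts the SPDK initiator to that same digest/dhgroup, attaches a controller with the matching --dhchap-key, confirms nvme0 shows up in bdev_nvme_get_controllers, and detaches it before the next combination. The sketch below condenses one such round; the RPC names, flags, NQNs and the 10.0.0.1 address are taken from the log itself, while the configfs paths, the keys array lookup, the TEST_TRANSPORT variable name and the assumption that key0..key4 were registered with the initiator earlier in the script are illustrative guesses, not the script's literal code.

# Minimal sketch of one digest/dhgroup/keyid iteration (assumptions noted above).
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path
	echo "hmac(${digest})"  > "${host}/dhchap_hash"      # e.g. hmac(sha384)
	echo "${dhgroup}"       > "${host}/dhchap_dhgroup"   # e.g. ffdhe8192
	echo "${keys[$keyid]}"  > "${host}/dhchap_key"       # DHHC-1:xx:<base64 secret>:
}

get_main_ns_ip() {
	# tcp runs use the initiator-side address; rdma runs would use the first target IP.
	local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
	local var=${ip_candidates[${TEST_TRANSPORT:-tcp}]}   # TEST_TRANSPORT name is an assumption
	echo "${!var}"                                       # resolves to 10.0.0.1 in this run
}

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Limit the initiator to the digest/dhgroup under test, then attach with the matching key.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
	# Authentication only counts as passed if the controller is actually visible afterwards.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

The outer loops visible in the trace simply walk the sha384/sha512 digests, the ffdhe2048 through ffdhe8192 groups and key ids 0 through 4, invoking the two helpers once per combination, which is why the same attach/get_controllers/detach sequence repeats for every parameter set in this excerpt.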
00:20:44.197 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.197 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.197 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.197 14:39:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.197 14:39:52 -- nvmf/common.sh@717 -- # local ip 00:20:44.197 14:39:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.197 14:39:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.197 14:39:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.197 14:39:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.197 14:39:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.197 14:39:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.197 14:39:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.197 14:39:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.197 14:39:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.197 14:39:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:44.197 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.197 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.472 nvme0n1 00:20:44.472 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.472 14:39:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.472 14:39:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.472 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.472 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.472 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.472 14:39:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.472 14:39:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.472 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.472 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.472 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.472 14:39:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.472 14:39:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:44.472 14:39:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.472 14:39:52 -- host/auth.sh@44 -- # digest=sha512 00:20:44.472 14:39:52 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.472 14:39:52 -- host/auth.sh@44 -- # keyid=1 00:20:44.472 14:39:52 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:44.472 14:39:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:44.472 14:39:52 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:44.472 14:39:52 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:44.472 14:39:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:20:44.472 14:39:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.472 14:39:52 -- host/auth.sh@68 -- # digest=sha512 00:20:44.472 14:39:52 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:44.472 14:39:52 -- host/auth.sh@68 -- # keyid=1 00:20:44.472 14:39:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.472 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.472 14:39:52 -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.472 14:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.472 14:39:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.472 14:39:52 -- nvmf/common.sh@717 -- # local ip 00:20:44.472 14:39:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.472 14:39:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.472 14:39:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.472 14:39:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.472 14:39:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.472 14:39:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.472 14:39:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.472 14:39:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.472 14:39:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.472 14:39:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:44.472 14:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.472 14:39:52 -- common/autotest_common.sh@10 -- # set +x 00:20:44.472 nvme0n1 00:20:44.472 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.472 14:39:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.472 14:39:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.472 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.472 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.472 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.738 14:39:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.738 14:39:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.738 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.738 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.738 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.738 14:39:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.738 14:39:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:44.738 14:39:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.738 14:39:53 -- host/auth.sh@44 -- # digest=sha512 00:20:44.738 14:39:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.738 14:39:53 -- host/auth.sh@44 -- # keyid=2 00:20:44.738 14:39:53 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:44.738 14:39:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:44.738 14:39:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:44.738 14:39:53 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:44.738 14:39:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:20:44.739 14:39:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.739 14:39:53 -- host/auth.sh@68 -- # digest=sha512 00:20:44.739 14:39:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:44.739 14:39:53 -- host/auth.sh@68 -- # keyid=2 00:20:44.739 14:39:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.739 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.739 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.739 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.739 14:39:53 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:20:44.739 14:39:53 -- nvmf/common.sh@717 -- # local ip 00:20:44.739 14:39:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.739 14:39:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.739 14:39:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.739 14:39:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.739 14:39:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.739 14:39:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.739 14:39:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.739 14:39:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.739 14:39:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.739 14:39:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:44.739 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.739 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.739 nvme0n1 00:20:44.739 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.739 14:39:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.739 14:39:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.739 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.739 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.739 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.739 14:39:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.739 14:39:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.739 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.739 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.739 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.739 14:39:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.739 14:39:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:44.739 14:39:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.739 14:39:53 -- host/auth.sh@44 -- # digest=sha512 00:20:44.739 14:39:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.739 14:39:53 -- host/auth.sh@44 -- # keyid=3 00:20:44.739 14:39:53 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:44.739 14:39:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:44.739 14:39:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:44.739 14:39:53 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:44.739 14:39:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:20:44.739 14:39:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.739 14:39:53 -- host/auth.sh@68 -- # digest=sha512 00:20:44.739 14:39:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:44.739 14:39:53 -- host/auth.sh@68 -- # keyid=3 00:20:44.739 14:39:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.739 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.739 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.739 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.739 14:39:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.739 14:39:53 -- nvmf/common.sh@717 -- # local ip 00:20:44.739 14:39:53 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:20:44.739 14:39:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.739 14:39:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.739 14:39:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.739 14:39:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.739 14:39:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.739 14:39:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.739 14:39:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.739 14:39:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.739 14:39:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:44.739 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.739 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.997 nvme0n1 00:20:44.997 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.997 14:39:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.997 14:39:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.997 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.997 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.997 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.997 14:39:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.997 14:39:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.997 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.997 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.997 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.997 14:39:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.997 14:39:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:44.997 14:39:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.997 14:39:53 -- host/auth.sh@44 -- # digest=sha512 00:20:44.997 14:39:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.997 14:39:53 -- host/auth.sh@44 -- # keyid=4 00:20:44.997 14:39:53 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:44.997 14:39:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:44.997 14:39:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:44.997 14:39:53 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:44.997 14:39:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:20:44.997 14:39:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.997 14:39:53 -- host/auth.sh@68 -- # digest=sha512 00:20:44.997 14:39:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:44.997 14:39:53 -- host/auth.sh@68 -- # keyid=4 00:20:44.997 14:39:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.997 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.997 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.997 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.997 14:39:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.997 14:39:53 -- nvmf/common.sh@717 -- # local ip 00:20:44.997 14:39:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.997 14:39:53 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:44.998 14:39:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.998 14:39:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.998 14:39:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.998 14:39:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.998 14:39:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.998 14:39:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.998 14:39:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.998 14:39:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:44.998 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.998 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.257 nvme0n1 00:20:45.257 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.257 14:39:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.257 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.257 14:39:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:45.257 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.257 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.257 14:39:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.257 14:39:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.257 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.257 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.257 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.257 14:39:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.257 14:39:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:45.257 14:39:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:45.257 14:39:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:45.257 14:39:53 -- host/auth.sh@44 -- # digest=sha512 00:20:45.257 14:39:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.257 14:39:53 -- host/auth.sh@44 -- # keyid=0 00:20:45.257 14:39:53 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:45.257 14:39:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:45.257 14:39:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:45.257 14:39:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:45.257 14:39:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:20:45.257 14:39:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:45.257 14:39:53 -- host/auth.sh@68 -- # digest=sha512 00:20:45.257 14:39:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:45.257 14:39:53 -- host/auth.sh@68 -- # keyid=0 00:20:45.257 14:39:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.257 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.257 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.257 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.257 14:39:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:45.257 14:39:53 -- nvmf/common.sh@717 -- # local ip 00:20:45.257 14:39:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.257 14:39:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.257 14:39:53 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.257 14:39:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.257 14:39:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:45.257 14:39:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.257 14:39:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:45.257 14:39:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:45.257 14:39:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:45.257 14:39:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:45.257 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.257 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.516 nvme0n1 00:20:45.516 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.516 14:39:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.516 14:39:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:45.516 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.516 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.516 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.516 14:39:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.516 14:39:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.516 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.516 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.516 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.516 14:39:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:45.516 14:39:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:45.516 14:39:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:45.516 14:39:53 -- host/auth.sh@44 -- # digest=sha512 00:20:45.516 14:39:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.516 14:39:53 -- host/auth.sh@44 -- # keyid=1 00:20:45.516 14:39:53 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:45.516 14:39:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:45.516 14:39:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:45.516 14:39:53 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:45.516 14:39:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:20:45.516 14:39:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:45.516 14:39:53 -- host/auth.sh@68 -- # digest=sha512 00:20:45.516 14:39:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:45.516 14:39:53 -- host/auth.sh@68 -- # keyid=1 00:20:45.516 14:39:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.516 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.516 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.516 14:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.516 14:39:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:45.516 14:39:53 -- nvmf/common.sh@717 -- # local ip 00:20:45.516 14:39:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.516 14:39:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.516 14:39:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.516 14:39:53 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.516 14:39:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:45.516 14:39:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.516 14:39:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:45.516 14:39:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:45.516 14:39:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:45.516 14:39:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:45.516 14:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.516 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.776 nvme0n1 00:20:45.776 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.776 14:39:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.776 14:39:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:45.776 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.776 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.776 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.776 14:39:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.776 14:39:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.776 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.776 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.776 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.776 14:39:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:45.776 14:39:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:45.776 14:39:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:45.776 14:39:54 -- host/auth.sh@44 -- # digest=sha512 00:20:45.776 14:39:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.776 14:39:54 -- host/auth.sh@44 -- # keyid=2 00:20:45.776 14:39:54 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:45.776 14:39:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:45.776 14:39:54 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:45.776 14:39:54 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:45.776 14:39:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:20:45.776 14:39:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:45.776 14:39:54 -- host/auth.sh@68 -- # digest=sha512 00:20:45.776 14:39:54 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:45.776 14:39:54 -- host/auth.sh@68 -- # keyid=2 00:20:45.776 14:39:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.776 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.776 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:45.776 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.776 14:39:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:45.776 14:39:54 -- nvmf/common.sh@717 -- # local ip 00:20:45.776 14:39:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.776 14:39:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.776 14:39:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.776 14:39:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.776 14:39:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:45.776 14:39:54 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:20:45.776 14:39:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:45.776 14:39:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:45.776 14:39:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:45.776 14:39:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:45.776 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.776 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:46.035 nvme0n1 00:20:46.035 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.035 14:39:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:46.035 14:39:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.035 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.035 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:46.035 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.035 14:39:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.035 14:39:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.035 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.035 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:46.035 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.035 14:39:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:46.035 14:39:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:46.035 14:39:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:46.035 14:39:54 -- host/auth.sh@44 -- # digest=sha512 00:20:46.035 14:39:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.035 14:39:54 -- host/auth.sh@44 -- # keyid=3 00:20:46.035 14:39:54 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:46.035 14:39:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:46.035 14:39:54 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:46.035 14:39:54 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:46.035 14:39:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:20:46.035 14:39:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:46.035 14:39:54 -- host/auth.sh@68 -- # digest=sha512 00:20:46.035 14:39:54 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:46.035 14:39:54 -- host/auth.sh@68 -- # keyid=3 00:20:46.035 14:39:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.035 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.035 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:46.035 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.035 14:39:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:46.035 14:39:54 -- nvmf/common.sh@717 -- # local ip 00:20:46.035 14:39:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:46.035 14:39:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:46.035 14:39:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.035 14:39:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.035 14:39:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:46.035 14:39:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.035 14:39:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:46.035 14:39:54 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:46.035 14:39:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:46.035 14:39:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:46.035 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.035 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:46.293 nvme0n1 00:20:46.293 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.293 14:39:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.293 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.293 14:39:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:46.293 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:46.293 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.293 14:39:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.293 14:39:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.293 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.293 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:46.293 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.293 14:39:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:46.293 14:39:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:46.293 14:39:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:46.293 14:39:54 -- host/auth.sh@44 -- # digest=sha512 00:20:46.293 14:39:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.293 14:39:54 -- host/auth.sh@44 -- # keyid=4 00:20:46.293 14:39:54 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:46.293 14:39:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:46.293 14:39:54 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:46.293 14:39:54 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:46.293 14:39:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:20:46.293 14:39:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:46.293 14:39:54 -- host/auth.sh@68 -- # digest=sha512 00:20:46.293 14:39:54 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:46.293 14:39:54 -- host/auth.sh@68 -- # keyid=4 00:20:46.293 14:39:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.293 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.293 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:46.293 14:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.293 14:39:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:46.293 14:39:54 -- nvmf/common.sh@717 -- # local ip 00:20:46.293 14:39:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:46.293 14:39:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:46.293 14:39:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.293 14:39:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.293 14:39:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:46.293 14:39:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.293 14:39:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:46.293 14:39:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:46.293 14:39:54 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:46.293 14:39:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:46.293 14:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.293 14:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:46.552 nvme0n1 00:20:46.552 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.552 14:39:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.552 14:39:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:46.552 14:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.552 14:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:46.552 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.552 14:39:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.552 14:39:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.552 14:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.552 14:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:46.552 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.552 14:39:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.552 14:39:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:46.552 14:39:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:46.552 14:39:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:46.552 14:39:55 -- host/auth.sh@44 -- # digest=sha512 00:20:46.552 14:39:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:46.552 14:39:55 -- host/auth.sh@44 -- # keyid=0 00:20:46.552 14:39:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:46.552 14:39:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:46.552 14:39:55 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:46.552 14:39:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:46.552 14:39:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:20:46.552 14:39:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:46.552 14:39:55 -- host/auth.sh@68 -- # digest=sha512 00:20:46.552 14:39:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:46.552 14:39:55 -- host/auth.sh@68 -- # keyid=0 00:20:46.552 14:39:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:46.552 14:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.552 14:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:46.552 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.552 14:39:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:46.552 14:39:55 -- nvmf/common.sh@717 -- # local ip 00:20:46.552 14:39:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:46.552 14:39:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:46.552 14:39:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.552 14:39:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.552 14:39:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:46.552 14:39:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.552 14:39:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:46.552 14:39:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:46.552 14:39:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:46.552 14:39:55 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:46.552 14:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.552 14:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:47.119 nvme0n1 00:20:47.119 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.119 14:39:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.119 14:39:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:47.119 14:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.119 14:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:47.119 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.119 14:39:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.119 14:39:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.119 14:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.119 14:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:47.119 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.119 14:39:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:47.119 14:39:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:47.119 14:39:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:47.119 14:39:55 -- host/auth.sh@44 -- # digest=sha512 00:20:47.119 14:39:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.119 14:39:55 -- host/auth.sh@44 -- # keyid=1 00:20:47.119 14:39:55 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:47.119 14:39:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:47.119 14:39:55 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:47.119 14:39:55 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:47.119 14:39:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:20:47.119 14:39:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:47.119 14:39:55 -- host/auth.sh@68 -- # digest=sha512 00:20:47.119 14:39:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:47.119 14:39:55 -- host/auth.sh@68 -- # keyid=1 00:20:47.119 14:39:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.119 14:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.119 14:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:47.119 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.119 14:39:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:47.119 14:39:55 -- nvmf/common.sh@717 -- # local ip 00:20:47.119 14:39:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:47.119 14:39:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:47.119 14:39:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.120 14:39:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.120 14:39:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:47.120 14:39:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.120 14:39:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:47.120 14:39:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:47.120 14:39:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:47.120 14:39:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:47.120 14:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.120 14:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:47.378 nvme0n1 00:20:47.378 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.378 14:39:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.378 14:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.378 14:39:55 -- common/autotest_common.sh@10 -- # set +x 00:20:47.378 14:39:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:47.378 14:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.637 14:39:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.637 14:39:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.637 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.637 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:47.637 14:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.637 14:39:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:47.637 14:39:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:47.637 14:39:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:47.637 14:39:56 -- host/auth.sh@44 -- # digest=sha512 00:20:47.637 14:39:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.637 14:39:56 -- host/auth.sh@44 -- # keyid=2 00:20:47.637 14:39:56 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:47.637 14:39:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:47.637 14:39:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:47.637 14:39:56 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:47.637 14:39:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:20:47.637 14:39:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:47.637 14:39:56 -- host/auth.sh@68 -- # digest=sha512 00:20:47.637 14:39:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:47.637 14:39:56 -- host/auth.sh@68 -- # keyid=2 00:20:47.637 14:39:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.637 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.637 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:47.637 14:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.637 14:39:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:47.637 14:39:56 -- nvmf/common.sh@717 -- # local ip 00:20:47.637 14:39:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:47.637 14:39:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:47.637 14:39:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.637 14:39:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.637 14:39:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:47.637 14:39:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.637 14:39:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:47.637 14:39:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:47.637 14:39:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:47.637 14:39:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:47.637 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.637 14:39:56 -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.895 nvme0n1 00:20:47.895 14:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.895 14:39:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.895 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.895 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:47.895 14:39:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:47.895 14:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.895 14:39:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.896 14:39:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.896 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.896 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:47.896 14:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.896 14:39:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:47.896 14:39:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:47.896 14:39:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:47.896 14:39:56 -- host/auth.sh@44 -- # digest=sha512 00:20:47.896 14:39:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.896 14:39:56 -- host/auth.sh@44 -- # keyid=3 00:20:47.896 14:39:56 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:47.896 14:39:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:47.896 14:39:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:47.896 14:39:56 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:47.896 14:39:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:20:47.896 14:39:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:47.896 14:39:56 -- host/auth.sh@68 -- # digest=sha512 00:20:47.896 14:39:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:47.896 14:39:56 -- host/auth.sh@68 -- # keyid=3 00:20:47.896 14:39:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.896 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.896 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:47.896 14:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.896 14:39:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:47.896 14:39:56 -- nvmf/common.sh@717 -- # local ip 00:20:47.896 14:39:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:47.896 14:39:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:47.896 14:39:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.896 14:39:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.896 14:39:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:47.896 14:39:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.896 14:39:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:47.896 14:39:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:47.896 14:39:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:47.896 14:39:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:47.896 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.896 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:48.463 nvme0n1 00:20:48.463 14:39:56 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:20:48.463 14:39:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.463 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.464 14:39:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:48.464 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:48.464 14:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.464 14:39:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.464 14:39:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.464 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.464 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:48.464 14:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.464 14:39:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:48.464 14:39:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:48.464 14:39:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:48.464 14:39:56 -- host/auth.sh@44 -- # digest=sha512 00:20:48.464 14:39:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.464 14:39:56 -- host/auth.sh@44 -- # keyid=4 00:20:48.464 14:39:56 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:48.464 14:39:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:48.464 14:39:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:48.464 14:39:56 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:48.464 14:39:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:20:48.464 14:39:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:48.464 14:39:56 -- host/auth.sh@68 -- # digest=sha512 00:20:48.464 14:39:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:48.464 14:39:56 -- host/auth.sh@68 -- # keyid=4 00:20:48.464 14:39:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:48.464 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.464 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:48.464 14:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.464 14:39:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:48.464 14:39:56 -- nvmf/common.sh@717 -- # local ip 00:20:48.464 14:39:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:48.464 14:39:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:48.464 14:39:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.464 14:39:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.464 14:39:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:48.464 14:39:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.464 14:39:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:48.464 14:39:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:48.464 14:39:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:48.464 14:39:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:48.464 14:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.464 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:48.722 nvme0n1 00:20:48.722 14:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.722 14:39:57 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:20:48.722 14:39:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.722 14:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.722 14:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:48.722 14:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.722 14:39:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.722 14:39:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.722 14:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.722 14:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:48.981 14:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.981 14:39:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.981 14:39:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:48.981 14:39:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:48.981 14:39:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:48.981 14:39:57 -- host/auth.sh@44 -- # digest=sha512 00:20:48.981 14:39:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:48.981 14:39:57 -- host/auth.sh@44 -- # keyid=0 00:20:48.981 14:39:57 -- host/auth.sh@45 -- # key=DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:48.981 14:39:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:48.981 14:39:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:48.981 14:39:57 -- host/auth.sh@49 -- # echo DHHC-1:00:MzE5ZDY0ZmVjZDg0N2YyMGQ4NmE2ODczZGI5ODY0ODHoQShb: 00:20:48.981 14:39:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:20:48.981 14:39:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:48.981 14:39:57 -- host/auth.sh@68 -- # digest=sha512 00:20:48.981 14:39:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:48.981 14:39:57 -- host/auth.sh@68 -- # keyid=0 00:20:48.981 14:39:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:48.981 14:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.981 14:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:48.981 14:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.981 14:39:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:48.981 14:39:57 -- nvmf/common.sh@717 -- # local ip 00:20:48.981 14:39:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:48.981 14:39:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:48.981 14:39:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.981 14:39:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.981 14:39:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:48.981 14:39:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.981 14:39:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:48.981 14:39:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:48.981 14:39:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:48.981 14:39:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:48.981 14:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.981 14:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.548 nvme0n1 00:20:49.548 14:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.548 14:39:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.548 14:39:58 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:20:49.548 14:39:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:49.548 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:49.548 14:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.548 14:39:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.548 14:39:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.548 14:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.548 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:49.548 14:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.548 14:39:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:49.548 14:39:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:49.548 14:39:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:49.548 14:39:58 -- host/auth.sh@44 -- # digest=sha512 00:20:49.548 14:39:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.548 14:39:58 -- host/auth.sh@44 -- # keyid=1 00:20:49.548 14:39:58 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:49.548 14:39:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:49.548 14:39:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:49.548 14:39:58 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:49.548 14:39:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:20:49.548 14:39:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:49.548 14:39:58 -- host/auth.sh@68 -- # digest=sha512 00:20:49.548 14:39:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:49.548 14:39:58 -- host/auth.sh@68 -- # keyid=1 00:20:49.548 14:39:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.548 14:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.548 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:49.548 14:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.548 14:39:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:49.548 14:39:58 -- nvmf/common.sh@717 -- # local ip 00:20:49.548 14:39:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:49.548 14:39:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:49.548 14:39:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.548 14:39:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.548 14:39:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:49.548 14:39:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.548 14:39:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:49.548 14:39:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:49.548 14:39:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:49.548 14:39:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:49.548 14:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.548 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:50.484 nvme0n1 00:20:50.484 14:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.484 14:39:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.484 14:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.484 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:50.484 14:39:58 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:20:50.484 14:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.484 14:39:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.484 14:39:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.484 14:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.484 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:50.484 14:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.484 14:39:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:50.484 14:39:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:50.484 14:39:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:50.484 14:39:58 -- host/auth.sh@44 -- # digest=sha512 00:20:50.484 14:39:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.484 14:39:58 -- host/auth.sh@44 -- # keyid=2 00:20:50.484 14:39:58 -- host/auth.sh@45 -- # key=DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:50.484 14:39:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:50.484 14:39:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:50.484 14:39:58 -- host/auth.sh@49 -- # echo DHHC-1:01:MjkyMTk4YTMzYWZmZjBmNzE2M2MyNzRjNTJmYWU1MTRouYPb: 00:20:50.484 14:39:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:20:50.484 14:39:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:50.484 14:39:58 -- host/auth.sh@68 -- # digest=sha512 00:20:50.484 14:39:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:50.484 14:39:58 -- host/auth.sh@68 -- # keyid=2 00:20:50.484 14:39:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:50.484 14:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.484 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:50.484 14:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.484 14:39:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:50.484 14:39:58 -- nvmf/common.sh@717 -- # local ip 00:20:50.484 14:39:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:50.484 14:39:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:50.484 14:39:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.484 14:39:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.484 14:39:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:50.484 14:39:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.484 14:39:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:50.484 14:39:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:50.484 14:39:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:50.484 14:39:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:50.484 14:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.484 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:51.052 nvme0n1 00:20:51.052 14:39:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.052 14:39:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.052 14:39:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.052 14:39:59 -- common/autotest_common.sh@10 -- # set +x 00:20:51.052 14:39:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:51.052 14:39:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.052 14:39:59 -- host/auth.sh@73 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:20:51.052 14:39:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.052 14:39:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.052 14:39:59 -- common/autotest_common.sh@10 -- # set +x 00:20:51.052 14:39:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.052 14:39:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:51.052 14:39:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:51.052 14:39:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:51.052 14:39:59 -- host/auth.sh@44 -- # digest=sha512 00:20:51.052 14:39:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:51.052 14:39:59 -- host/auth.sh@44 -- # keyid=3 00:20:51.052 14:39:59 -- host/auth.sh@45 -- # key=DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:51.052 14:39:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:51.052 14:39:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:51.052 14:39:59 -- host/auth.sh@49 -- # echo DHHC-1:02:YWY3NGQ2ZjQ5ZGVmNzk5NzA4YzBlMmUzYWEzZDA4NGM4NDFkZWYxZmQwMTIwMjRlA0DrtA==: 00:20:51.052 14:39:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:20:51.052 14:39:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:51.052 14:39:59 -- host/auth.sh@68 -- # digest=sha512 00:20:51.052 14:39:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:51.052 14:39:59 -- host/auth.sh@68 -- # keyid=3 00:20:51.052 14:39:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:51.052 14:39:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.052 14:39:59 -- common/autotest_common.sh@10 -- # set +x 00:20:51.052 14:39:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.052 14:39:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:51.052 14:39:59 -- nvmf/common.sh@717 -- # local ip 00:20:51.052 14:39:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:51.052 14:39:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:51.052 14:39:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.052 14:39:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.052 14:39:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:51.052 14:39:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.052 14:39:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:51.052 14:39:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:51.052 14:39:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:51.052 14:39:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:51.052 14:39:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.052 14:39:59 -- common/autotest_common.sh@10 -- # set +x 00:20:51.618 nvme0n1 00:20:51.618 14:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.618 14:40:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.618 14:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.618 14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:51.618 14:40:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:51.618 14:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.878 14:40:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.878 14:40:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.878 
14:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.878 14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:51.878 14:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.878 14:40:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:51.878 14:40:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:51.878 14:40:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:51.878 14:40:00 -- host/auth.sh@44 -- # digest=sha512 00:20:51.878 14:40:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:51.878 14:40:00 -- host/auth.sh@44 -- # keyid=4 00:20:51.878 14:40:00 -- host/auth.sh@45 -- # key=DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:51.878 14:40:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:51.878 14:40:00 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:51.878 14:40:00 -- host/auth.sh@49 -- # echo DHHC-1:03:ZDAwYTRjYTA2MzczMThjYjdiNDI1YmExMTQ5Zjg2ZDMxZTJkMzBkOWJjYzQ3MDgxMDJkNzVmMTI0MmM5YTVjNxgtXYI=: 00:20:51.878 14:40:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:20:51.878 14:40:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:51.878 14:40:00 -- host/auth.sh@68 -- # digest=sha512 00:20:51.878 14:40:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:51.878 14:40:00 -- host/auth.sh@68 -- # keyid=4 00:20:51.878 14:40:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:51.878 14:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.878 14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:51.878 14:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.878 14:40:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:51.878 14:40:00 -- nvmf/common.sh@717 -- # local ip 00:20:51.878 14:40:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:51.878 14:40:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:51.878 14:40:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.878 14:40:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.878 14:40:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:51.878 14:40:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.878 14:40:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:51.878 14:40:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:51.878 14:40:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:51.878 14:40:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:51.878 14:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.878 14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:52.456 nvme0n1 00:20:52.456 14:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.456 14:40:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.456 14:40:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:52.456 14:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.456 14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:52.456 14:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.456 14:40:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.456 14:40:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.456 14:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.456 
14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:52.456 14:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.456 14:40:00 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:52.456 14:40:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:52.456 14:40:00 -- host/auth.sh@44 -- # digest=sha256 00:20:52.456 14:40:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.456 14:40:00 -- host/auth.sh@44 -- # keyid=1 00:20:52.456 14:40:00 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:52.456 14:40:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:52.456 14:40:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:52.456 14:40:00 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjg5OGFlM2YwZTgzYTVkMmJkYzQwMjlmNjFlOGY1MzJhZmFlYWQxYzk4MTJjYTY1bDJd3w==: 00:20:52.456 14:40:00 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.456 14:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.456 14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:52.456 14:40:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.456 14:40:00 -- host/auth.sh@119 -- # get_main_ns_ip 00:20:52.456 14:40:00 -- nvmf/common.sh@717 -- # local ip 00:20:52.456 14:40:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:52.456 14:40:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:52.456 14:40:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.456 14:40:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.456 14:40:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:52.456 14:40:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.456 14:40:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:52.456 14:40:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:52.456 14:40:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:52.456 14:40:00 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:52.456 14:40:00 -- common/autotest_common.sh@638 -- # local es=0 00:20:52.456 14:40:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:52.456 14:40:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:52.456 14:40:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:52.456 14:40:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:52.456 14:40:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:52.456 14:40:00 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:52.456 14:40:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.456 14:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:52.456 request: 00:20:52.456 { 00:20:52.456 "name": "nvme0", 00:20:52.456 "trtype": "tcp", 00:20:52.456 "traddr": "10.0.0.1", 00:20:52.456 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:52.456 "adrfam": "ipv4", 00:20:52.456 "trsvcid": "4420", 00:20:52.456 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:52.456 "method": "bdev_nvme_attach_controller", 00:20:52.456 "req_id": 1 00:20:52.456 } 00:20:52.456 Got JSON-RPC error 
response 00:20:52.456 response: 00:20:52.456 { 00:20:52.456 "code": -32602, 00:20:52.456 "message": "Invalid parameters" 00:20:52.456 } 00:20:52.456 14:40:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:52.456 14:40:01 -- common/autotest_common.sh@641 -- # es=1 00:20:52.456 14:40:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:52.456 14:40:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:52.456 14:40:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:52.456 14:40:01 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.456 14:40:01 -- host/auth.sh@121 -- # jq length 00:20:52.456 14:40:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.456 14:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:52.456 14:40:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.456 14:40:01 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:20:52.456 14:40:01 -- host/auth.sh@124 -- # get_main_ns_ip 00:20:52.456 14:40:01 -- nvmf/common.sh@717 -- # local ip 00:20:52.456 14:40:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:52.456 14:40:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:52.456 14:40:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.456 14:40:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.716 14:40:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:52.716 14:40:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.716 14:40:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:52.716 14:40:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:52.716 14:40:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:52.716 14:40:01 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:52.716 14:40:01 -- common/autotest_common.sh@638 -- # local es=0 00:20:52.716 14:40:01 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:52.716 14:40:01 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:52.716 14:40:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:52.716 14:40:01 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:52.716 14:40:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:52.716 14:40:01 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:52.716 14:40:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.716 14:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:52.716 request: 00:20:52.716 { 00:20:52.716 "name": "nvme0", 00:20:52.716 "trtype": "tcp", 00:20:52.716 "traddr": "10.0.0.1", 00:20:52.716 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:52.716 "adrfam": "ipv4", 00:20:52.716 "trsvcid": "4420", 00:20:52.716 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:52.716 "dhchap_key": "key2", 00:20:52.716 "method": "bdev_nvme_attach_controller", 00:20:52.716 "req_id": 1 00:20:52.716 } 00:20:52.716 Got JSON-RPC error response 00:20:52.716 response: 00:20:52.716 { 00:20:52.716 "code": -32602, 00:20:52.716 "message": "Invalid parameters" 00:20:52.716 } 00:20:52.716 14:40:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
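[Editor's note] The two rejected attach attempts above are the deliberate negative path of the auth suite: connecting without a DH-CHAP key, and then with a key the target does not expect for this host, must both fail with JSON-RPC error -32602. A minimal standalone sketch of the same check, assuming rpc.py is the transport behind rpc_cmd and the target from this run is still listening on 10.0.0.1:4420 (the hostnqn/subnqn and "key2" name are taken from the trace):

    # Attach with no DH-CHAP key: the authenticated target must refuse it.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unexpected: unauthenticated attach succeeded" >&2
      exit 1
    fi
    # Attach with a key that does not match what nvmet holds for this host.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
      echo "unexpected: attach with mismatched key succeeded" >&2
      exit 1
    fi

In both cases the expected outcome is exactly the "Invalid parameters" (-32602) response recorded above, and bdev_nvme_get_controllers returning an empty list afterwards confirms no controller was left behind.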
00:20:52.716 14:40:01 -- common/autotest_common.sh@641 -- # es=1 00:20:52.716 14:40:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:52.716 14:40:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:52.716 14:40:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:52.716 14:40:01 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.716 14:40:01 -- host/auth.sh@127 -- # jq length 00:20:52.716 14:40:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.716 14:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:52.716 14:40:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.716 14:40:01 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:20:52.716 14:40:01 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:20:52.716 14:40:01 -- host/auth.sh@130 -- # cleanup 00:20:52.716 14:40:01 -- host/auth.sh@24 -- # nvmftestfini 00:20:52.716 14:40:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:52.716 14:40:01 -- nvmf/common.sh@117 -- # sync 00:20:52.716 14:40:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:52.716 14:40:01 -- nvmf/common.sh@120 -- # set +e 00:20:52.716 14:40:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:52.716 14:40:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:52.716 rmmod nvme_tcp 00:20:52.716 rmmod nvme_fabrics 00:20:52.716 14:40:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:52.716 14:40:01 -- nvmf/common.sh@124 -- # set -e 00:20:52.716 14:40:01 -- nvmf/common.sh@125 -- # return 0 00:20:52.716 14:40:01 -- nvmf/common.sh@478 -- # '[' -n 74216 ']' 00:20:52.716 14:40:01 -- nvmf/common.sh@479 -- # killprocess 74216 00:20:52.716 14:40:01 -- common/autotest_common.sh@936 -- # '[' -z 74216 ']' 00:20:52.716 14:40:01 -- common/autotest_common.sh@940 -- # kill -0 74216 00:20:52.716 14:40:01 -- common/autotest_common.sh@941 -- # uname 00:20:52.716 14:40:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:52.716 14:40:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74216 00:20:52.716 killing process with pid 74216 00:20:52.716 14:40:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:52.716 14:40:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:52.716 14:40:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74216' 00:20:52.716 14:40:01 -- common/autotest_common.sh@955 -- # kill 74216 00:20:52.716 14:40:01 -- common/autotest_common.sh@960 -- # wait 74216 00:20:52.974 14:40:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:52.974 14:40:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:52.974 14:40:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:52.974 14:40:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.974 14:40:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.974 14:40:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.974 14:40:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.974 14:40:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.974 14:40:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:52.974 14:40:01 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:52.974 14:40:01 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:52.974 14:40:01 -- host/auth.sh@27 -- # clean_kernel_target 00:20:52.974 14:40:01 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:52.974 14:40:01 -- nvmf/common.sh@675 -- # echo 0 00:20:52.974 14:40:01 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:52.974 14:40:01 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:52.974 14:40:01 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:52.974 14:40:01 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:52.974 14:40:01 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:52.974 14:40:01 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:20:52.974 14:40:01 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:53.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:53.908 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.908 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.909 14:40:02 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5qi /tmp/spdk.key-null.5oH /tmp/spdk.key-sha256.K5v /tmp/spdk.key-sha384.UPv /tmp/spdk.key-sha512.ZR2 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:53.909 14:40:02 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:54.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.166 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:54.166 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:54.166 00:20:54.166 real 0m38.871s 00:20:54.166 user 0m34.789s 00:20:54.166 sys 0m3.447s 00:20:54.166 14:40:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:54.166 14:40:02 -- common/autotest_common.sh@10 -- # set +x 00:20:54.166 ************************************ 00:20:54.166 END TEST nvmf_auth 00:20:54.166 ************************************ 00:20:54.424 14:40:02 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:20:54.424 14:40:02 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:54.424 14:40:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:54.424 14:40:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:54.424 14:40:02 -- common/autotest_common.sh@10 -- # set +x 00:20:54.424 ************************************ 00:20:54.424 START TEST nvmf_digest 00:20:54.424 ************************************ 00:20:54.424 14:40:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:54.424 * Looking for test storage... 
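[Editor's note] The cleanup that just ran unwinds the kernel nvmet target used as the authenticated endpoint. A condensed sketch of that teardown, following the configfs layout the trace shows (subsystem nqn.2024-02.io.spdk:cnode0 on port 1); the redirect target of the untraced "echo 0" is an assumption, namely the namespace enable attribute:

    cfs=/sys/kernel/config/nvmet
    rm  "$cfs/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir "$cfs/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$cfs/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable"  # assumption: disable ns before removal
    rm -f "$cfs/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$cfs/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$cfs/subsystems/nqn.2024-02.io.spdk:cnode0"
    modprobe -r nvmet_tcp nvmet   # only unloads once no configfs holders remain

After this the key files under /tmp (spdk.key-null.*, spdk.key-sha*) are removed and setup.sh rebinds the NVMe devices, which is what the driver-rebind lines above show.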
00:20:54.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:54.424 14:40:02 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:54.424 14:40:02 -- nvmf/common.sh@7 -- # uname -s 00:20:54.424 14:40:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.424 14:40:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.424 14:40:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.424 14:40:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.424 14:40:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.424 14:40:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.424 14:40:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.424 14:40:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.424 14:40:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.424 14:40:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.424 14:40:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:20:54.424 14:40:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:20:54.424 14:40:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.424 14:40:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.424 14:40:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:54.424 14:40:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.424 14:40:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.424 14:40:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.424 14:40:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.424 14:40:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.424 14:40:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.424 14:40:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.424 14:40:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.424 14:40:02 -- paths/export.sh@5 -- # export PATH 00:20:54.424 14:40:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.424 14:40:02 -- nvmf/common.sh@47 -- # : 0 00:20:54.424 14:40:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.424 14:40:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.424 14:40:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.424 14:40:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.424 14:40:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.424 14:40:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.424 14:40:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.424 14:40:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.424 14:40:02 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:54.424 14:40:02 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:54.424 14:40:02 -- host/digest.sh@16 -- # runtime=2 00:20:54.424 14:40:02 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:54.424 14:40:02 -- host/digest.sh@138 -- # nvmftestinit 00:20:54.424 14:40:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:54.424 14:40:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.424 14:40:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:54.424 14:40:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:54.424 14:40:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:54.424 14:40:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.424 14:40:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.424 14:40:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.424 14:40:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:54.424 14:40:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:54.424 14:40:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:54.424 14:40:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:54.424 14:40:02 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:54.424 14:40:02 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:54.424 14:40:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.424 14:40:02 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.424 14:40:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:54.424 14:40:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:54.424 14:40:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:20:54.424 14:40:02 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:54.424 14:40:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:54.424 14:40:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.424 14:40:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:54.424 14:40:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:54.424 14:40:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:54.424 14:40:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:54.424 14:40:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:54.424 14:40:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:54.424 Cannot find device "nvmf_tgt_br" 00:20:54.424 14:40:02 -- nvmf/common.sh@155 -- # true 00:20:54.424 14:40:02 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.424 Cannot find device "nvmf_tgt_br2" 00:20:54.424 14:40:03 -- nvmf/common.sh@156 -- # true 00:20:54.424 14:40:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:54.424 14:40:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:54.424 Cannot find device "nvmf_tgt_br" 00:20:54.424 14:40:03 -- nvmf/common.sh@158 -- # true 00:20:54.424 14:40:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:54.683 Cannot find device "nvmf_tgt_br2" 00:20:54.683 14:40:03 -- nvmf/common.sh@159 -- # true 00:20:54.683 14:40:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:54.683 14:40:03 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:54.683 14:40:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.683 14:40:03 -- nvmf/common.sh@162 -- # true 00:20:54.683 14:40:03 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.683 14:40:03 -- nvmf/common.sh@163 -- # true 00:20:54.683 14:40:03 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:54.683 14:40:03 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:54.683 14:40:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:54.683 14:40:03 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:54.683 14:40:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:54.683 14:40:03 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:54.683 14:40:03 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:54.683 14:40:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:54.683 14:40:03 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:54.683 14:40:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:54.683 14:40:03 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:54.683 14:40:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:54.683 14:40:03 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:54.683 14:40:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:54.683 14:40:03 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:54.683 14:40:03 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:54.683 14:40:03 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:54.683 14:40:03 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:54.683 14:40:03 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:54.683 14:40:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:54.683 14:40:03 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:54.683 14:40:03 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:54.942 14:40:03 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:54.942 14:40:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:54.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:54.942 00:20:54.942 --- 10.0.0.2 ping statistics --- 00:20:54.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.942 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:54.942 14:40:03 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:54.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:54.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:54.942 00:20:54.942 --- 10.0.0.3 ping statistics --- 00:20:54.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.942 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:54.942 14:40:03 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:54.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:54.942 00:20:54.942 --- 10.0.0.1 ping statistics --- 00:20:54.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.942 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:54.942 14:40:03 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.942 14:40:03 -- nvmf/common.sh@422 -- # return 0 00:20:54.942 14:40:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:54.942 14:40:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.942 14:40:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:54.942 14:40:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:54.942 14:40:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.942 14:40:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:54.942 14:40:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:54.942 14:40:03 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:54.942 14:40:03 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:54.942 14:40:03 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:54.942 14:40:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:54.942 14:40:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:54.942 14:40:03 -- common/autotest_common.sh@10 -- # set +x 00:20:54.942 ************************************ 00:20:54.942 START TEST nvmf_digest_clean 00:20:54.942 ************************************ 00:20:54.942 14:40:03 -- common/autotest_common.sh@1111 -- # run_digest 00:20:54.942 14:40:03 -- host/digest.sh@120 -- # local dsa_initiator 00:20:54.942 14:40:03 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:54.942 14:40:03 -- host/digest.sh@121 -- # dsa_initiator=false 
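For orientation, the nvmf_veth_init commands traced above amount to roughly the following. This is a condensed sketch reconstructed from the logged commands; the second target interface (nvmf_tgt_if2 / 10.0.0.3), the pre-cleanup, and error handling are omitted.

  # Build an initiator/target veth pair bridged together, with the target leg moved
  # into its own network namespace, then open TCP port 4420 and sanity-check with ping.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator namespace reaching the target, as verified above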
00:20:54.942 14:40:03 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:54.942 14:40:03 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:54.942 14:40:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:54.942 14:40:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:54.942 14:40:03 -- common/autotest_common.sh@10 -- # set +x 00:20:54.942 14:40:03 -- nvmf/common.sh@470 -- # nvmfpid=75828 00:20:54.942 14:40:03 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:54.942 14:40:03 -- nvmf/common.sh@471 -- # waitforlisten 75828 00:20:54.942 14:40:03 -- common/autotest_common.sh@817 -- # '[' -z 75828 ']' 00:20:54.942 14:40:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.942 14:40:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:54.942 14:40:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.942 14:40:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:54.942 14:40:03 -- common/autotest_common.sh@10 -- # set +x 00:20:54.942 [2024-04-17 14:40:03.457911] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:20:54.942 [2024-04-17 14:40:03.458034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.201 [2024-04-17 14:40:03.598078] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.201 [2024-04-17 14:40:03.665064] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.201 [2024-04-17 14:40:03.665127] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.201 [2024-04-17 14:40:03.665141] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.201 [2024-04-17 14:40:03.665163] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.201 [2024-04-17 14:40:03.665189] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
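The nvmfappstart/waitforlisten step just traced boils down to launching nvmf_tgt paused inside the target namespace and polling its RPC socket before any configuration is attempted. A minimal sketch, with the polling loop standing in for waitforlisten (the real helper also handles timeouts and dead processes):

  # Start the target with --wait-for-rpc so it stops after RPC init, then wait for
  # /var/tmp/spdk.sock to answer before sending configuration RPCs.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done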
00:20:55.201 [2024-04-17 14:40:03.665230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.136 14:40:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:56.136 14:40:04 -- common/autotest_common.sh@850 -- # return 0 00:20:56.136 14:40:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:56.136 14:40:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:56.136 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:20:56.136 14:40:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.136 14:40:04 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:56.136 14:40:04 -- host/digest.sh@126 -- # common_target_config 00:20:56.136 14:40:04 -- host/digest.sh@43 -- # rpc_cmd 00:20:56.136 14:40:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.136 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:20:56.136 null0 00:20:56.136 [2024-04-17 14:40:04.577911] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.136 [2024-04-17 14:40:04.602076] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.136 14:40:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.136 14:40:04 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:56.136 14:40:04 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:56.136 14:40:04 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:56.136 14:40:04 -- host/digest.sh@80 -- # rw=randread 00:20:56.136 14:40:04 -- host/digest.sh@80 -- # bs=4096 00:20:56.136 14:40:04 -- host/digest.sh@80 -- # qd=128 00:20:56.136 14:40:04 -- host/digest.sh@80 -- # scan_dsa=false 00:20:56.136 14:40:04 -- host/digest.sh@83 -- # bperfpid=75860 00:20:56.136 14:40:04 -- host/digest.sh@84 -- # waitforlisten 75860 /var/tmp/bperf.sock 00:20:56.136 14:40:04 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:56.136 14:40:04 -- common/autotest_common.sh@817 -- # '[' -z 75860 ']' 00:20:56.136 14:40:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:56.136 14:40:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.136 14:40:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:56.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:56.136 14:40:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.136 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:20:56.136 [2024-04-17 14:40:04.650995] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:20:56.136 [2024-04-17 14:40:04.651078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75860 ] 00:20:56.395 [2024-04-17 14:40:04.790570] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.395 [2024-04-17 14:40:04.848694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.331 14:40:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.331 14:40:05 -- common/autotest_common.sh@850 -- # return 0 00:20:57.331 14:40:05 -- host/digest.sh@86 -- # false 00:20:57.331 14:40:05 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:57.331 14:40:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:57.591 14:40:05 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.591 14:40:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.850 nvme0n1 00:20:57.850 14:40:06 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:57.850 14:40:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:57.850 Running I/O for 2 seconds... 00:21:00.385 00:21:00.385 Latency(us) 00:21:00.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.385 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:00.385 nvme0n1 : 2.00 14355.11 56.07 0.00 0.00 8909.81 8102.63 24188.74 00:21:00.385 =================================================================================================================== 00:21:00.385 Total : 14355.11 56.07 0.00 0.00 8909.81 8102.63 24188.74 00:21:00.385 0 00:21:00.385 14:40:08 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:00.385 14:40:08 -- host/digest.sh@93 -- # get_accel_stats 00:21:00.385 14:40:08 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:00.385 14:40:08 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:00.385 | select(.opcode=="crc32c") 00:21:00.385 | "\(.module_name) \(.executed)"' 00:21:00.385 14:40:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:00.385 14:40:08 -- host/digest.sh@94 -- # false 00:21:00.385 14:40:08 -- host/digest.sh@94 -- # exp_module=software 00:21:00.385 14:40:08 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:00.385 14:40:08 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:00.385 14:40:08 -- host/digest.sh@98 -- # killprocess 75860 00:21:00.385 14:40:08 -- common/autotest_common.sh@936 -- # '[' -z 75860 ']' 00:21:00.385 14:40:08 -- common/autotest_common.sh@940 -- # kill -0 75860 00:21:00.385 14:40:08 -- common/autotest_common.sh@941 -- # uname 00:21:00.386 14:40:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.386 14:40:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75860 00:21:00.386 14:40:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:00.386 14:40:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:00.386 killing process with pid 75860 00:21:00.386 
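Each run_bperf pass above follows the same recipe: start bdevperf paused on its own RPC socket, finish framework init, attach an NVMe-oF controller with the data digest enabled, then drive the workload through bdevperf.py. A sketch using the first run's parameters as logged (only the -w/-o/-q values change between runs):

  # bdevperf is the initiator-side load generator; -z keeps it resident, --wait-for-rpc
  # pauses it until framework_start_init is issued over /var/tmp/bperf.sock.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  bperf_rpc framework_start_init
  # --ddgst turns on the NVMe/TCP data digest (crc32c), which is what this test measures.
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests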
14:40:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75860' 00:21:00.386 14:40:08 -- common/autotest_common.sh@955 -- # kill 75860 00:21:00.386 Received shutdown signal, test time was about 2.000000 seconds 00:21:00.386 00:21:00.386 Latency(us) 00:21:00.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.386 =================================================================================================================== 00:21:00.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.386 14:40:08 -- common/autotest_common.sh@960 -- # wait 75860 00:21:00.386 14:40:08 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:00.386 14:40:08 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:00.386 14:40:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:00.386 14:40:08 -- host/digest.sh@80 -- # rw=randread 00:21:00.386 14:40:08 -- host/digest.sh@80 -- # bs=131072 00:21:00.386 14:40:08 -- host/digest.sh@80 -- # qd=16 00:21:00.386 14:40:08 -- host/digest.sh@80 -- # scan_dsa=false 00:21:00.386 14:40:08 -- host/digest.sh@83 -- # bperfpid=75920 00:21:00.386 14:40:08 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:00.386 14:40:08 -- host/digest.sh@84 -- # waitforlisten 75920 /var/tmp/bperf.sock 00:21:00.386 14:40:08 -- common/autotest_common.sh@817 -- # '[' -z 75920 ']' 00:21:00.386 14:40:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:00.386 14:40:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:00.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:00.386 14:40:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:00.386 14:40:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:00.386 14:40:08 -- common/autotest_common.sh@10 -- # set +x 00:21:00.386 [2024-04-17 14:40:08.973688] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:00.386 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:00.386 Zero copy mechanism will not be used. 
00:21:00.386 [2024-04-17 14:40:08.974679] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75920 ] 00:21:00.645 [2024-04-17 14:40:09.112090] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.645 [2024-04-17 14:40:09.196851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.626 14:40:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:01.626 14:40:09 -- common/autotest_common.sh@850 -- # return 0 00:21:01.626 14:40:09 -- host/digest.sh@86 -- # false 00:21:01.626 14:40:09 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:01.626 14:40:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:01.884 14:40:10 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:01.884 14:40:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:02.143 nvme0n1 00:21:02.143 14:40:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:02.143 14:40:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:02.143 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:02.143 Zero copy mechanism will not be used. 00:21:02.143 Running I/O for 2 seconds... 00:21:04.672 00:21:04.672 Latency(us) 00:21:04.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.672 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:04.672 nvme0n1 : 2.00 7161.81 895.23 0.00 0.00 2230.91 2055.45 7089.80 00:21:04.672 =================================================================================================================== 00:21:04.672 Total : 7161.81 895.23 0.00 0.00 2230.91 2055.45 7089.80 00:21:04.672 0 00:21:04.672 14:40:12 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:04.672 14:40:12 -- host/digest.sh@93 -- # get_accel_stats 00:21:04.672 14:40:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:04.672 14:40:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:04.672 14:40:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:04.672 | select(.opcode=="crc32c") 00:21:04.672 | "\(.module_name) \(.executed)"' 00:21:04.672 14:40:13 -- host/digest.sh@94 -- # false 00:21:04.672 14:40:13 -- host/digest.sh@94 -- # exp_module=software 00:21:04.672 14:40:13 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:04.672 14:40:13 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:04.672 14:40:13 -- host/digest.sh@98 -- # killprocess 75920 00:21:04.672 14:40:13 -- common/autotest_common.sh@936 -- # '[' -z 75920 ']' 00:21:04.672 14:40:13 -- common/autotest_common.sh@940 -- # kill -0 75920 00:21:04.672 14:40:13 -- common/autotest_common.sh@941 -- # uname 00:21:04.672 14:40:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.672 14:40:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75920 00:21:04.672 14:40:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:04.672 
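The pass/fail decision for each run is the accel-stats check traced above: ask the bdevperf app which accel module actually executed the crc32c operations and how many times. In outline (jq filter copied from the log; "software" is the expected module here because no DSA initiator or target was configured in this job):

  # Pull accel framework statistics from the bperf socket and keep only the crc32c row,
  # printed as "<module_name> <executed>".
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # Digests must have been computed at least once, and by the expected module.
  (( acc_executed > 0 )) && [[ $acc_module == software ]]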
14:40:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:04.672 killing process with pid 75920 00:21:04.672 14:40:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75920' 00:21:04.672 14:40:13 -- common/autotest_common.sh@955 -- # kill 75920 00:21:04.672 Received shutdown signal, test time was about 2.000000 seconds 00:21:04.672 00:21:04.672 Latency(us) 00:21:04.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.672 =================================================================================================================== 00:21:04.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.672 14:40:13 -- common/autotest_common.sh@960 -- # wait 75920 00:21:04.672 14:40:13 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:04.672 14:40:13 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:04.672 14:40:13 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:04.672 14:40:13 -- host/digest.sh@80 -- # rw=randwrite 00:21:04.672 14:40:13 -- host/digest.sh@80 -- # bs=4096 00:21:04.672 14:40:13 -- host/digest.sh@80 -- # qd=128 00:21:04.672 14:40:13 -- host/digest.sh@80 -- # scan_dsa=false 00:21:04.672 14:40:13 -- host/digest.sh@83 -- # bperfpid=75980 00:21:04.672 14:40:13 -- host/digest.sh@84 -- # waitforlisten 75980 /var/tmp/bperf.sock 00:21:04.673 14:40:13 -- common/autotest_common.sh@817 -- # '[' -z 75980 ']' 00:21:04.673 14:40:13 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:04.673 14:40:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:04.673 14:40:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:04.673 14:40:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:04.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:04.673 14:40:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:04.673 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:21:04.673 [2024-04-17 14:40:13.274041] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:21:04.673 [2024-04-17 14:40:13.274761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75980 ] 00:21:04.931 [2024-04-17 14:40:13.410994] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.931 [2024-04-17 14:40:13.481661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.189 14:40:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:05.189 14:40:13 -- common/autotest_common.sh@850 -- # return 0 00:21:05.189 14:40:13 -- host/digest.sh@86 -- # false 00:21:05.189 14:40:13 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:05.189 14:40:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:05.448 14:40:13 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:05.448 14:40:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:05.707 nvme0n1 00:21:05.707 14:40:14 -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:05.707 14:40:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:05.964 Running I/O for 2 seconds... 00:21:07.865 00:21:07.865 Latency(us) 00:21:07.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.865 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:07.865 nvme0n1 : 2.01 15246.40 59.56 0.00 0.00 8388.25 7864.32 15966.95 00:21:07.865 =================================================================================================================== 00:21:07.865 Total : 15246.40 59.56 0.00 0.00 8388.25 7864.32 15966.95 00:21:07.865 0 00:21:07.865 14:40:16 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:07.865 14:40:16 -- host/digest.sh@93 -- # get_accel_stats 00:21:07.865 14:40:16 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:07.865 | select(.opcode=="crc32c") 00:21:07.865 | "\(.module_name) \(.executed)"' 00:21:07.865 14:40:16 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:07.865 14:40:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:08.134 14:40:16 -- host/digest.sh@94 -- # false 00:21:08.134 14:40:16 -- host/digest.sh@94 -- # exp_module=software 00:21:08.134 14:40:16 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:08.134 14:40:16 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:08.134 14:40:16 -- host/digest.sh@98 -- # killprocess 75980 00:21:08.134 14:40:16 -- common/autotest_common.sh@936 -- # '[' -z 75980 ']' 00:21:08.134 14:40:16 -- common/autotest_common.sh@940 -- # kill -0 75980 00:21:08.134 14:40:16 -- common/autotest_common.sh@941 -- # uname 00:21:08.134 14:40:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:08.134 14:40:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75980 00:21:08.396 killing process with pid 75980 00:21:08.396 Received shutdown signal, test time was about 2.000000 seconds 00:21:08.396 00:21:08.396 Latency(us) 00:21:08.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:08.396 =================================================================================================================== 00:21:08.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.396 14:40:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:08.396 14:40:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:08.396 14:40:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75980' 00:21:08.396 14:40:16 -- common/autotest_common.sh@955 -- # kill 75980 00:21:08.396 14:40:16 -- common/autotest_common.sh@960 -- # wait 75980 00:21:08.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:08.396 14:40:16 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:08.396 14:40:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:08.396 14:40:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:08.396 14:40:16 -- host/digest.sh@80 -- # rw=randwrite 00:21:08.396 14:40:16 -- host/digest.sh@80 -- # bs=131072 00:21:08.396 14:40:16 -- host/digest.sh@80 -- # qd=16 00:21:08.396 14:40:16 -- host/digest.sh@80 -- # scan_dsa=false 00:21:08.396 14:40:16 -- host/digest.sh@83 -- # bperfpid=76034 00:21:08.396 14:40:16 -- host/digest.sh@84 -- # waitforlisten 76034 /var/tmp/bperf.sock 00:21:08.396 14:40:16 -- common/autotest_common.sh@817 -- # '[' -z 76034 ']' 00:21:08.396 14:40:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:08.396 14:40:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:08.396 14:40:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:08.396 14:40:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:08.396 14:40:16 -- common/autotest_common.sh@10 -- # set +x 00:21:08.396 14:40:16 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:08.396 [2024-04-17 14:40:16.978484] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:08.396 [2024-04-17 14:40:16.978590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76034 ] 00:21:08.397 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:08.397 Zero copy mechanism will not be used. 
00:21:08.656 [2024-04-17 14:40:17.115300] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.656 [2024-04-17 14:40:17.182151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.591 14:40:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:09.591 14:40:17 -- common/autotest_common.sh@850 -- # return 0 00:21:09.591 14:40:17 -- host/digest.sh@86 -- # false 00:21:09.591 14:40:17 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:09.591 14:40:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:09.849 14:40:18 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:09.849 14:40:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:10.108 nvme0n1 00:21:10.108 14:40:18 -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:10.108 14:40:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:10.366 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:10.366 Zero copy mechanism will not be used. 00:21:10.366 Running I/O for 2 seconds... 00:21:12.268 00:21:12.268 Latency(us) 00:21:12.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.268 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:12.268 nvme0n1 : 2.00 6222.12 777.77 0.00 0.00 2565.97 1921.40 5630.14 00:21:12.268 =================================================================================================================== 00:21:12.268 Total : 6222.12 777.77 0.00 0.00 2565.97 1921.40 5630.14 00:21:12.268 0 00:21:12.268 14:40:20 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:12.268 14:40:20 -- host/digest.sh@93 -- # get_accel_stats 00:21:12.268 14:40:20 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:12.268 | select(.opcode=="crc32c") 00:21:12.268 | "\(.module_name) \(.executed)"' 00:21:12.268 14:40:20 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:12.268 14:40:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:12.526 14:40:21 -- host/digest.sh@94 -- # false 00:21:12.526 14:40:21 -- host/digest.sh@94 -- # exp_module=software 00:21:12.526 14:40:21 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:12.526 14:40:21 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:12.526 14:40:21 -- host/digest.sh@98 -- # killprocess 76034 00:21:12.526 14:40:21 -- common/autotest_common.sh@936 -- # '[' -z 76034 ']' 00:21:12.526 14:40:21 -- common/autotest_common.sh@940 -- # kill -0 76034 00:21:12.526 14:40:21 -- common/autotest_common.sh@941 -- # uname 00:21:12.526 14:40:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:12.526 14:40:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76034 00:21:12.526 killing process with pid 76034 00:21:12.526 Received shutdown signal, test time was about 2.000000 seconds 00:21:12.526 00:21:12.526 Latency(us) 00:21:12.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.526 =================================================================================================================== 00:21:12.526 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.526 14:40:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:12.526 14:40:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:12.526 14:40:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76034' 00:21:12.527 14:40:21 -- common/autotest_common.sh@955 -- # kill 76034 00:21:12.527 14:40:21 -- common/autotest_common.sh@960 -- # wait 76034 00:21:12.785 14:40:21 -- host/digest.sh@132 -- # killprocess 75828 00:21:12.785 14:40:21 -- common/autotest_common.sh@936 -- # '[' -z 75828 ']' 00:21:12.785 14:40:21 -- common/autotest_common.sh@940 -- # kill -0 75828 00:21:12.785 14:40:21 -- common/autotest_common.sh@941 -- # uname 00:21:12.785 14:40:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:12.785 14:40:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75828 00:21:12.785 killing process with pid 75828 00:21:12.785 14:40:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:12.785 14:40:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:12.785 14:40:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75828' 00:21:12.785 14:40:21 -- common/autotest_common.sh@955 -- # kill 75828 00:21:12.785 14:40:21 -- common/autotest_common.sh@960 -- # wait 75828 00:21:13.044 00:21:13.044 real 0m18.103s 00:21:13.044 user 0m35.631s 00:21:13.044 sys 0m4.375s 00:21:13.044 14:40:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:13.044 ************************************ 00:21:13.044 END TEST nvmf_digest_clean 00:21:13.044 ************************************ 00:21:13.044 14:40:21 -- common/autotest_common.sh@10 -- # set +x 00:21:13.044 14:40:21 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:13.044 14:40:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:13.044 14:40:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:13.044 14:40:21 -- common/autotest_common.sh@10 -- # set +x 00:21:13.044 ************************************ 00:21:13.044 START TEST nvmf_digest_error 00:21:13.044 ************************************ 00:21:13.044 14:40:21 -- common/autotest_common.sh@1111 -- # run_digest_error 00:21:13.044 14:40:21 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:13.044 14:40:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:13.044 14:40:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:13.044 14:40:21 -- common/autotest_common.sh@10 -- # set +x 00:21:13.044 14:40:21 -- nvmf/common.sh@470 -- # nvmfpid=76121 00:21:13.044 14:40:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:13.044 14:40:21 -- nvmf/common.sh@471 -- # waitforlisten 76121 00:21:13.044 14:40:21 -- common/autotest_common.sh@817 -- # '[' -z 76121 ']' 00:21:13.044 14:40:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.044 14:40:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:13.044 14:40:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
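Each killprocess call in the teardown sequences above (75860, 75920, 75980, 76034 and finally the target pid 75828) goes through the same autotest helper. In outline it does roughly the following; this is a simplified paraphrase, not the exact implementation, which also refuses to kill sudo and logs more detail:

  # Verify the pid is set and alive, note which reactor it is, then kill it and reap it
  # so the next test starts from a clean slate.
  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }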
00:21:13.044 14:40:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:13.044 14:40:21 -- common/autotest_common.sh@10 -- # set +x 00:21:13.303 [2024-04-17 14:40:21.678395] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:13.303 [2024-04-17 14:40:21.678520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.303 [2024-04-17 14:40:21.820997] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.303 [2024-04-17 14:40:21.877790] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.303 [2024-04-17 14:40:21.877861] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.303 [2024-04-17 14:40:21.877881] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.303 [2024-04-17 14:40:21.877894] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.303 [2024-04-17 14:40:21.877905] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.303 [2024-04-17 14:40:21.877977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.257 14:40:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:14.257 14:40:22 -- common/autotest_common.sh@850 -- # return 0 00:21:14.257 14:40:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:14.257 14:40:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:14.257 14:40:22 -- common/autotest_common.sh@10 -- # set +x 00:21:14.257 14:40:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.257 14:40:22 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:14.257 14:40:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.257 14:40:22 -- common/autotest_common.sh@10 -- # set +x 00:21:14.257 [2024-04-17 14:40:22.670536] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:14.257 14:40:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.257 14:40:22 -- host/digest.sh@105 -- # common_target_config 00:21:14.257 14:40:22 -- host/digest.sh@43 -- # rpc_cmd 00:21:14.257 14:40:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.257 14:40:22 -- common/autotest_common.sh@10 -- # set +x 00:21:14.257 null0 00:21:14.257 [2024-04-17 14:40:22.740525] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.257 [2024-04-17 14:40:22.764653] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.257 14:40:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.257 14:40:22 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:14.257 14:40:22 -- host/digest.sh@54 -- # local rw bs qd 00:21:14.257 14:40:22 -- host/digest.sh@56 -- # rw=randread 00:21:14.257 14:40:22 -- host/digest.sh@56 -- # bs=4096 00:21:14.257 14:40:22 -- host/digest.sh@56 -- # qd=128 00:21:14.257 14:40:22 -- host/digest.sh@58 -- # bperfpid=76159 00:21:14.257 14:40:22 -- host/digest.sh@60 -- # waitforlisten 76159 /var/tmp/bperf.sock 00:21:14.257 14:40:22 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randread -o 4096 -t 2 -q 128 -z 00:21:14.257 14:40:22 -- common/autotest_common.sh@817 -- # '[' -z 76159 ']' 00:21:14.257 14:40:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:14.257 14:40:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:14.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:14.257 14:40:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:14.257 14:40:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:14.257 14:40:22 -- common/autotest_common.sh@10 -- # set +x 00:21:14.257 [2024-04-17 14:40:22.821674] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:14.257 [2024-04-17 14:40:22.821799] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76159 ] 00:21:14.515 [2024-04-17 14:40:22.961616] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.515 [2024-04-17 14:40:23.037741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.449 14:40:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:15.449 14:40:23 -- common/autotest_common.sh@850 -- # return 0 00:21:15.449 14:40:23 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:15.449 14:40:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:15.707 14:40:24 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:15.707 14:40:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.707 14:40:24 -- common/autotest_common.sh@10 -- # set +x 00:21:15.707 14:40:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.707 14:40:24 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:15.707 14:40:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:15.965 nvme0n1 00:21:15.965 14:40:24 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:15.965 14:40:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.965 14:40:24 -- common/autotest_common.sh@10 -- # set +x 00:21:15.965 14:40:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.965 14:40:24 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:15.965 14:40:24 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:16.223 Running I/O for 2 seconds... 
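The error-injection variant differs from the clean runs only in how the digest path is prepared: on the target, crc32c was assigned to the error accel module at test start and is now armed to corrupt digests; on the initiator, retries are disabled so every bad digest surfaces as a failed I/O, which is what produces the stream of data digest errors below. A sketch of the gist, assuming rpc_cmd talks to the target's /var/tmp/spdk.sock (as the waitforlisten lines above indicate); the rpc_tgt/bperf_rpc wrapper names are just shorthand:

  rpc_tgt()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock  "$@"; }
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  # Initiator: keep NVMe error statistics and never retry, so each bad digest fails the I/O.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_tgt accel_error_inject_error -o crc32c -t disable        # start from a clean state
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt the next 256 digests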
00:21:16.223 [2024-04-17 14:40:24.627173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.627231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.627247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.644931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.644985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.645000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.662924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.662979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.662993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.681636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.681702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.681726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.699545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.699590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.699605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.717303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.717348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.717363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.735109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.735158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.735173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.752925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.752981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.752996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.770709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.770753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.770767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.788847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.788899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.788915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.806680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.806727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.806741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.223 [2024-04-17 14:40:24.825686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.223 [2024-04-17 14:40:24.825733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.223 [2024-04-17 14:40:24.825748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.482 [2024-04-17 14:40:24.843790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.482 [2024-04-17 14:40:24.843831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.482 [2024-04-17 14:40:24.843844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.482 [2024-04-17 14:40:24.861627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.482 [2024-04-17 14:40:24.861677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.482 [2024-04-17 14:40:24.861691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.482 [2024-04-17 14:40:24.879496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.482 [2024-04-17 14:40:24.879547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.482 [2024-04-17 14:40:24.879561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.482 [2024-04-17 14:40:24.897789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.482 [2024-04-17 14:40:24.897834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.482 [2024-04-17 14:40:24.897848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.482 [2024-04-17 14:40:24.915606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.482 [2024-04-17 14:40:24.915651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.482 [2024-04-17 14:40:24.915665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.482 [2024-04-17 14:40:24.933342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.482 [2024-04-17 14:40:24.933388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.482 [2024-04-17 14:40:24.933402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.482 [2024-04-17 14:40:24.951101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.482 [2024-04-17 14:40:24.951142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.482 [2024-04-17 14:40:24.951156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.482 [2024-04-17 14:40:24.968889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.482 [2024-04-17 14:40:24.968942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.482 [2024-04-17 14:40:24.968970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.482 [2024-04-17 14:40:24.986794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.483 [2024-04-17 14:40:24.986859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.483 [2024-04-17 14:40:24.986874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.483 [2024-04-17 14:40:25.004612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.483 [2024-04-17 14:40:25.004656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.483 [2024-04-17 14:40:25.004669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.483 [2024-04-17 14:40:25.022362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.483 [2024-04-17 14:40:25.022412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.483 [2024-04-17 14:40:25.022426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.483 [2024-04-17 14:40:25.040137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.483 [2024-04-17 14:40:25.040184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.483 [2024-04-17 14:40:25.040198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.483 [2024-04-17 14:40:25.057859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.483 [2024-04-17 14:40:25.057905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.483 [2024-04-17 14:40:25.057919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.483 [2024-04-17 14:40:25.075835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.483 [2024-04-17 14:40:25.075875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.483 [2024-04-17 14:40:25.075888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.093651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.093691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.093704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.111335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.111374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 
[2024-04-17 14:40:25.111387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.129031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.129074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.129088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.146729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.146770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.146784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.164439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.164481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.164495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.182216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.182259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.182272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.199970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.200010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.200024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.217676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.217723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.217736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.235365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.235403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6484 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.235417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.253212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.253267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.253288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.271344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.271383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.271396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.289058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.289101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.289114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.306854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.306896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.306910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.324696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.324736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.324750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.742 [2024-04-17 14:40:25.342503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:16.742 [2024-04-17 14:40:25.342543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.742 [2024-04-17 14:40:25.342557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.360163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.360203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:12632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.360216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.377895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.377936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.377961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.395620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.395661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.395674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.413449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.413488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.413501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.431252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.431289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.431303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.449012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.449049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.449062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.466914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.466960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.466975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.484766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.484805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.484820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.502536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.502572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.502585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.520291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.520328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.520341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.537977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.538014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.538027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.555845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.555884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.555897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.574010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.574051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.574064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.001 [2024-04-17 14:40:25.591894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.001 [2024-04-17 14:40:25.591956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.001 [2024-04-17 14:40:25.591971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.260 [2024-04-17 14:40:25.609910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 
00:21:17.260 [2024-04-17 14:40:25.609984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.260 [2024-04-17 14:40:25.609999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.260 [2024-04-17 14:40:25.628121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.260 [2024-04-17 14:40:25.628189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.260 [2024-04-17 14:40:25.628203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.260 [2024-04-17 14:40:25.646400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.260 [2024-04-17 14:40:25.646469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.260 [2024-04-17 14:40:25.646484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.260 [2024-04-17 14:40:25.664693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.260 [2024-04-17 14:40:25.664762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.260 [2024-04-17 14:40:25.664777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.260 [2024-04-17 14:40:25.683018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.260 [2024-04-17 14:40:25.683077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.260 [2024-04-17 14:40:25.683091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.260 [2024-04-17 14:40:25.701463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.260 [2024-04-17 14:40:25.701516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.260 [2024-04-17 14:40:25.701531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.260 [2024-04-17 14:40:25.719804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.260 [2024-04-17 14:40:25.719870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.260 [2024-04-17 14:40:25.719885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.260 [2024-04-17 14:40:25.738130] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.260 [2024-04-17 14:40:25.738197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.260 [2024-04-17 14:40:25.738212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.261 [2024-04-17 14:40:25.764448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.261 [2024-04-17 14:40:25.764518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.261 [2024-04-17 14:40:25.764533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.261 [2024-04-17 14:40:25.782780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.261 [2024-04-17 14:40:25.782850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.261 [2024-04-17 14:40:25.782865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.261 [2024-04-17 14:40:25.800979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.261 [2024-04-17 14:40:25.801047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.261 [2024-04-17 14:40:25.801061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.261 [2024-04-17 14:40:25.819137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.261 [2024-04-17 14:40:25.819203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.261 [2024-04-17 14:40:25.819219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.261 [2024-04-17 14:40:25.837658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.261 [2024-04-17 14:40:25.837727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.261 [2024-04-17 14:40:25.837741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.261 [2024-04-17 14:40:25.856045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.261 [2024-04-17 14:40:25.856119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.261 [2024-04-17 14:40:25.856134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:17.520 [2024-04-17 14:40:25.874406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.520 [2024-04-17 14:40:25.874476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.520 [2024-04-17 14:40:25.874490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.520 [2024-04-17 14:40:25.893311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.520 [2024-04-17 14:40:25.893404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.520 [2024-04-17 14:40:25.893423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.520 [2024-04-17 14:40:25.913200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.520 [2024-04-17 14:40:25.913290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.520 [2024-04-17 14:40:25.913312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.520 [2024-04-17 14:40:25.931886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.520 [2024-04-17 14:40:25.931974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.520 [2024-04-17 14:40:25.931991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.520 [2024-04-17 14:40:25.952329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.520 [2024-04-17 14:40:25.952412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.520 [2024-04-17 14:40:25.952430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.521 [2024-04-17 14:40:25.971960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.521 [2024-04-17 14:40:25.972035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.521 [2024-04-17 14:40:25.972051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.521 [2024-04-17 14:40:25.991720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.521 [2024-04-17 14:40:25.991832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.521 [2024-04-17 14:40:25.991860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.521 [2024-04-17 14:40:26.011351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.521 [2024-04-17 14:40:26.011431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.521 [2024-04-17 14:40:26.011447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.521 [2024-04-17 14:40:26.030462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.521 [2024-04-17 14:40:26.030561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.521 [2024-04-17 14:40:26.030587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.521 [2024-04-17 14:40:26.050520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.521 [2024-04-17 14:40:26.050633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.521 [2024-04-17 14:40:26.050663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.521 [2024-04-17 14:40:26.069431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.521 [2024-04-17 14:40:26.069515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.521 [2024-04-17 14:40:26.069532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.521 [2024-04-17 14:40:26.088063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.521 [2024-04-17 14:40:26.088131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.521 [2024-04-17 14:40:26.088147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.521 [2024-04-17 14:40:26.106154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.521 [2024-04-17 14:40:26.106219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.521 [2024-04-17 14:40:26.106234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.123923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.123980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 [2024-04-17 14:40:26.123994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.142626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.142691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 [2024-04-17 14:40:26.142706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.161381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.161445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 [2024-04-17 14:40:26.161460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.179217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.179261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 [2024-04-17 14:40:26.179276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.196916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.196968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 [2024-04-17 14:40:26.196983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.215573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.215623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 [2024-04-17 14:40:26.215637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.233489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.233542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 [2024-04-17 14:40:26.233555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.251815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.251863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 
[2024-04-17 14:40:26.251877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.270379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.270427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 [2024-04-17 14:40:26.270441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.781 [2024-04-17 14:40:26.288611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.781 [2024-04-17 14:40:26.288653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.781 [2024-04-17 14:40:26.288667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.782 [2024-04-17 14:40:26.306488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.782 [2024-04-17 14:40:26.306535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.782 [2024-04-17 14:40:26.306549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.782 [2024-04-17 14:40:26.324278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.782 [2024-04-17 14:40:26.324323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.782 [2024-04-17 14:40:26.324336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.782 [2024-04-17 14:40:26.342007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.782 [2024-04-17 14:40:26.342051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.782 [2024-04-17 14:40:26.342066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.782 [2024-04-17 14:40:26.359772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.782 [2024-04-17 14:40:26.359817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.782 [2024-04-17 14:40:26.359832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.782 [2024-04-17 14:40:26.377715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:17.782 [2024-04-17 14:40:26.377783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:785 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.782 [2024-04-17 14:40:26.377797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.395621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.395683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.395698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.413369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.413415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.413428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.431135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.431186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.431200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.449647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.449696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.449711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.467555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.467603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.467617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.485633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.485684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.485700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.503884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.503934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:11006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.503959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.522032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.522098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.522113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.540263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.540344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.540360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.559144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.559196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.559212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.577794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.577851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.577867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 [2024-04-17 14:40:26.596182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa5460) 00:21:18.041 [2024-04-17 14:40:26.596257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.041 [2024-04-17 14:40:26.596275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.041 00:21:18.041 Latency(us) 00:21:18.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.041 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:18.041 nvme0n1 : 2.01 13936.80 54.44 0.00 0.00 9178.10 8698.41 35508.60 00:21:18.041 =================================================================================================================== 00:21:18.041 Total : 13936.80 54.44 0.00 0.00 9178.10 8698.41 35508.60 00:21:18.041 0 00:21:18.041 14:40:26 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:18.041 14:40:26 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:18.041 14:40:26 -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:18.041 14:40:26 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:18.041 | .driver_specific 00:21:18.041 | .nvme_error 00:21:18.041 | .status_code 00:21:18.041 | .command_transient_transport_error' 00:21:18.300 14:40:26 -- host/digest.sh@71 -- # (( 109 > 0 )) 00:21:18.300 14:40:26 -- host/digest.sh@73 -- # killprocess 76159 00:21:18.300 14:40:26 -- common/autotest_common.sh@936 -- # '[' -z 76159 ']' 00:21:18.300 14:40:26 -- common/autotest_common.sh@940 -- # kill -0 76159 00:21:18.300 14:40:26 -- common/autotest_common.sh@941 -- # uname 00:21:18.300 14:40:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:18.300 14:40:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76159 00:21:18.300 14:40:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:18.300 14:40:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:18.300 killing process with pid 76159 00:21:18.300 14:40:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76159' 00:21:18.300 14:40:26 -- common/autotest_common.sh@955 -- # kill 76159 00:21:18.300 Received shutdown signal, test time was about 2.000000 seconds 00:21:18.300 00:21:18.300 Latency(us) 00:21:18.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.300 =================================================================================================================== 00:21:18.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.300 14:40:26 -- common/autotest_common.sh@960 -- # wait 76159 00:21:18.558 14:40:27 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:18.558 14:40:27 -- host/digest.sh@54 -- # local rw bs qd 00:21:18.558 14:40:27 -- host/digest.sh@56 -- # rw=randread 00:21:18.558 14:40:27 -- host/digest.sh@56 -- # bs=131072 00:21:18.558 14:40:27 -- host/digest.sh@56 -- # qd=16 00:21:18.558 14:40:27 -- host/digest.sh@58 -- # bperfpid=76220 00:21:18.558 14:40:27 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:18.558 14:40:27 -- host/digest.sh@60 -- # waitforlisten 76220 /var/tmp/bperf.sock 00:21:18.558 14:40:27 -- common/autotest_common.sh@817 -- # '[' -z 76220 ']' 00:21:18.558 14:40:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:18.558 14:40:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:18.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:18.558 14:40:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:18.558 14:40:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:18.558 14:40:27 -- common/autotest_common.sh@10 -- # set +x 00:21:18.558 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:18.558 Zero copy mechanism will not be used. 00:21:18.558 [2024-04-17 14:40:27.137522] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:21:18.558 [2024-04-17 14:40:27.137614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76220 ] 00:21:18.817 [2024-04-17 14:40:27.271876] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.817 [2024-04-17 14:40:27.330415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.817 14:40:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:18.817 14:40:27 -- common/autotest_common.sh@850 -- # return 0 00:21:18.817 14:40:27 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:18.817 14:40:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:19.384 14:40:27 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:19.384 14:40:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.384 14:40:27 -- common/autotest_common.sh@10 -- # set +x 00:21:19.384 14:40:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.384 14:40:27 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.385 14:40:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.643 nvme0n1 00:21:19.643 14:40:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:19.643 14:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.643 14:40:28 -- common/autotest_common.sh@10 -- # set +x 00:21:19.643 14:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.643 14:40:28 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:19.643 14:40:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:19.643 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:19.643 Zero copy mechanism will not be used. 00:21:19.643 Running I/O for 2 seconds... 
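The xtrace above (host/digest.sh) shows the harness confirming that the first randread pass produced a non-zero command_transient_transport_error count (109 > 0), killing bperf pid 76159, and then bringing up a fresh bdevperf for the 131072-byte, queue-depth-16 randread pass with crc32c corruption injected so that received data digests fail. A minimal sketch of that RPC sequence, assuming rpc_cmd resolves to scripts/rpc.py against the harness's default RPC socket (the other paths and arguments are taken verbatim from the trace above):

  # enable per-controller error statistics and unlimited retries on the bdevperf instance on bperf.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # keep crc32c error injection disabled while the controller is attached
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # attach the TCP target with data digest enabled (--ddgst), exposing it as nvme0/nvme0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # re-enable injection so received crc32c digests are corrupted (flags as in the trace), then run the timed I/O pass
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # afterwards the transient transport error counter is read back the same way as shown above for the previous pass
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Every COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion that follows is the expected effect of that corruption: the initiator detects the data digest mismatch on receive (nvme_tcp_accel_seq_recv_compute_crc32_done) and completes the READ with a transient transport status, which is what the error counter above tallies.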
00:21:19.643 [2024-04-17 14:40:28.219257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.643 [2024-04-17 14:40:28.219316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.643 [2024-04-17 14:40:28.219333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.643 [2024-04-17 14:40:28.223868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.643 [2024-04-17 14:40:28.223928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.643 [2024-04-17 14:40:28.223944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.643 [2024-04-17 14:40:28.228612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.643 [2024-04-17 14:40:28.228670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.643 [2024-04-17 14:40:28.228685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.643 [2024-04-17 14:40:28.233351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.643 [2024-04-17 14:40:28.233393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.643 [2024-04-17 14:40:28.233407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.643 [2024-04-17 14:40:28.238504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.643 [2024-04-17 14:40:28.238552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.643 [2024-04-17 14:40:28.238567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.643 [2024-04-17 14:40:28.243199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.643 [2024-04-17 14:40:28.243245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.643 [2024-04-17 14:40:28.243260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.247820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.247865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.247879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.252368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.252411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.252434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.256895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.256937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.256965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.261454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.261511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.261534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.266585] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.266632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.266647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.271349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.271394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.271409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.276058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.276099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.276113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.280723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.280765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.280780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.285273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.285327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.285341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.289923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.289977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.289993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.294577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.294618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.294632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.299917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.299979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.299995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.304735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.304781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.304796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.310027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.310117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.310140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.315735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.315808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.904 [2024-04-17 14:40:28.315847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.321143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.321189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.321204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.325797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.325872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.325887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.331041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.331086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.331101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.335911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.335984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.336001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.340916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.340975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.340991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.345760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.345835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.345875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.350568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.350612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.904 [2024-04-17 14:40:28.350627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.904 [2024-04-17 14:40:28.355660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.904 [2024-04-17 14:40:28.355721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.355736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.360752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.360814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.360829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.365530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.365576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.365590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.370427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.370473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.370488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.375472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.375518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.375532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.380114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.380156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.380171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.385138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.385207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.385230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.391460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.391529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.391553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.397547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.397593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.397608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.402314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.402379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.402403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.407031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.407072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.407103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.411734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.411776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.411791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.416251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.416291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.416305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.421306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 
00:21:19.905 [2024-04-17 14:40:28.421362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.421379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.426020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.426061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.426076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.430748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.430794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.430809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.435653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.435705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.435721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.440460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.440519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.440544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.445153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.445197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.445212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.450394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.450454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.450476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.455077] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.455123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.455139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.459759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.459803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.459817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.464667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.464714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.464729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.469874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.469962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.469985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.475517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.475571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.475590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.481170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.481255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.481275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.486803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.905 [2024-04-17 14:40:28.486872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.905 [2024-04-17 14:40:28.486891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:19.905 [2024-04-17 14:40:28.492377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.906 [2024-04-17 14:40:28.492429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.906 [2024-04-17 14:40:28.492449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.906 [2024-04-17 14:40:28.498344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.906 [2024-04-17 14:40:28.498398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.906 [2024-04-17 14:40:28.498418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.906 [2024-04-17 14:40:28.504031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:19.906 [2024-04-17 14:40:28.504085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.906 [2024-04-17 14:40:28.504105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.510116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.510172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.510194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.515760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.515817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.515838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.521438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.521492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.521512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.527206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.527264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.527284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.532926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.533008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.533028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.538767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.538824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.538844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.544574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.544646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.544666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.550477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.550578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.550604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.556378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.556484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.556510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.561154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.561230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.561247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.565861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.565923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.565939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.166 [2024-04-17 14:40:28.570586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.166 [2024-04-17 14:40:28.570647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.166 [2024-04-17 14:40:28.570662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.575216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.575271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.575286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.579953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.580038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.580053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.584657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.584746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.584760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.589940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.590048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.590064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.594799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.594862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.594878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.599644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.599711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:20.167 [2024-04-17 14:40:28.599728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.604800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.604867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.604884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.609935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.610076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.610104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.614803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.614861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.614877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.620299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.620372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.620389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.625500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.625569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.625585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.630358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.630450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.630476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.635243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.635316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.635334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.640129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.640190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.640206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.645273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.645374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.645392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.650798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.650867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.650899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.655805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.655884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.655899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.661242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.661361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.661387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.667248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.667350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.667377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.672523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.672597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.672612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.677243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.677316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.677332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.682257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.682330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.682353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.687567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.687638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.687654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.692350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.692413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.692428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.697341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.697412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.697427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.702195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.167 [2024-04-17 14:40:28.702276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.167 [2024-04-17 14:40:28.702299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.167 [2024-04-17 14:40:28.707252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 
00:21:20.167 [2024-04-17 14:40:28.707326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.707343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.712067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.712128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.712144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.717013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.717077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.717093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.722099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.722186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.722203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.727099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.727167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.727183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.732213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.732288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.732312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.737052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.737128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.737144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.742336] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.742397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.742413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.748364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.748438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.748465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.754709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.754790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.754825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.759581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.759638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.759654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.168 [2024-04-17 14:40:28.764604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.168 [2024-04-17 14:40:28.764670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.168 [2024-04-17 14:40:28.764686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.769351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.769422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.769438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.774151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.774210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.774226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.778997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.779056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.779072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.783673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.783729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.783744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.788548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.788602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.788618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.793346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.793400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.793416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.797981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.798022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.798036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.802487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.802529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.802543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.807080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.807120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.807134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.811699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.811739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.811754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.816254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.816294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.816308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.820772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.820812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.820826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.825321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.825361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.825374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.829875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.829915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.829929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.834438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.834479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.834493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.839022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.839061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.839075] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.843609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.843649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.843663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.848212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.848253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.848267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.852765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.852805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.852819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.857343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.857382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.857395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.861937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.861988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.862003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.866600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.429 [2024-04-17 14:40:28.866638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.429 [2024-04-17 14:40:28.866652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.429 [2024-04-17 14:40:28.871142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.871178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.871192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.875764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.875804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.875818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.880389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.880427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.880441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.885006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.885044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.885059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.889902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.889960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.889977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.894672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.894715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.894731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.899458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.899518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.899540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.904334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.904377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.904392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.909070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.909110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.909124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.913740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.913780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.913795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.918462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.918502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.918517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.923487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.923529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.923544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.928246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.928293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.928314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.933118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.933159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.933174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.938053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.938095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.938110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.942703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.942743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.942775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.947377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.947417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.947432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.951916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.951971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.951987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.956519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.956560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.956574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.961135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.961174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.961189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.965695] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.965738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.965753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.970386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 
00:21:20.430 [2024-04-17 14:40:28.970428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.970442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.975040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.975081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.975095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.979632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.979674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.979688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.984259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.984298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.984312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.988858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.988898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.988912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.993362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.993403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.993417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:28.997975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:28.998029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:28.998043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:29.003168] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:29.003217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:29.003232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:29.008973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:29.009049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:29.009074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:29.015578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:29.015651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:29.015676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:29.022410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:29.022484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:29.022508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.430 [2024-04-17 14:40:29.028816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.430 [2024-04-17 14:40:29.028911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.430 [2024-04-17 14:40:29.028936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.035709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.035789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.035816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.041338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.041387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.041403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.046501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.046548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.046563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.051189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.051234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.051249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.056372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.056418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.056433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.061095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.061136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.061150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.065783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.065824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.065838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.070355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.070396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.070409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.075048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.075103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.075133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.079844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.079917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.079947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.084509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.084549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.084563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.089060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.089098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.089112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.093676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.691 [2024-04-17 14:40:29.093717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.691 [2024-04-17 14:40:29.093731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.691 [2024-04-17 14:40:29.098295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.098335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.098348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.102876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.102933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.102960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.107473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.107513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.107527] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.112140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.112185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.112199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.116706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.116748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.116761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.121330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.121372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.121386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.125927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.125980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.125996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.130419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.130458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.130472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.135047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.135085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.135098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.139634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.139673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.139687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.144269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.144309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.144323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.148863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.148906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.148921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.153599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.153639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.153653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.158228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.158269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.158283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.162812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.162886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.162917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.167449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.167490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.167504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.172127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.172166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.172179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.176741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.176784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.176798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.181394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.181434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.181448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.185889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.185929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.185943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.190439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.190480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.190494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.195054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.195093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.195106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.199604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.199644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.692 [2024-04-17 14:40:29.199659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.692 [2024-04-17 14:40:29.204172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.692 [2024-04-17 14:40:29.204211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.204225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.208754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.208794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.208808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.213290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.213364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.213379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.217961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.218000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.218014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.222554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.222595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.222609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.227158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.227197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.227211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.231764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.231809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.231824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.236382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 
00:21:20.693 [2024-04-17 14:40:29.236437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.236453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.241156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.241221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.241252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.245883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.245939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.245974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.250533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.250588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.250604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.255241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.255295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.255310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.259870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.259928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.259944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.264455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.264510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.264525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.269075] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.269130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.269146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.273679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.273734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.273749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.278317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.278371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.278387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.283124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.283182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.283197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.693 [2024-04-17 14:40:29.287747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.693 [2024-04-17 14:40:29.287808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.693 [2024-04-17 14:40:29.287830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.292444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.292509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.292525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.297056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.297117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.297133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.301741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.301807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.301823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.306391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.306456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.306473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.310959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.311020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.311036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.315657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.315720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.315736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.320354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.320420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.320436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.325082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.325143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.325159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.329730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.329787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.329803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.334431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.334502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.334517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.339060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.339119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.339134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.343704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.343763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.343779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.348393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.348453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.348468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.352976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.353033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.353049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.357617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.357678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.357693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.362296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.954 [2024-04-17 14:40:29.362352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.954 [2024-04-17 14:40:29.362368] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.954 [2024-04-17 14:40:29.366960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.367016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.367031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.371518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.371576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.371591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.376192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.376251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.376266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.380794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.380853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.380868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.385451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.385511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.385527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.390028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.390084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.390099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.394571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.394630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.394646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.399147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.399201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.399218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.403771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.403830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.403846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.408423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.408482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.408498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.413082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.413140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.413155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.417732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.417790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.417805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.422246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.422303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.422318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.426880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.426938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.426972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.431526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.431582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.431599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.436154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.436209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.436224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.440780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.440835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.440850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.445436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.445492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.445508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.450022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.450077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.450093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.454729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.454789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.454805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.459402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.459460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.459476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.464067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.464123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.464139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.468753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.468810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.468826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.473443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.473501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.473516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.478060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.478115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.478130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.482638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.482694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.482709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.487319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.955 [2024-04-17 14:40:29.487366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.955 [2024-04-17 14:40:29.487385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.955 [2024-04-17 14:40:29.492006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 
00:21:20.955 [2024-04-17 14:40:29.492046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.492060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.496491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.496532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.496546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.501078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.501117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.501131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.505629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.505670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.505685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.510216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.510256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.510270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.514818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.514859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.514874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.519338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.519378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.519391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.523903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.523944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.523974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.528559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.528602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.528616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.533131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.533170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.533184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.537602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.537642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.537656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.542165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.542203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.542217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.546712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.546753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.546768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.956 [2024-04-17 14:40:29.551271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:20.956 [2024-04-17 14:40:29.551311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.956 [2024-04-17 14:40:29.551326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.555865] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.555906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.555921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.560415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.560455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.560470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.565055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.565093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.565107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.569629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.569669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.569684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.574191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.574230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.574244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.578737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.578778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.578792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.583383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.583423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.583437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.587998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.588038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.588052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.592590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.592632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.592646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.597093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.597132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.597146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.601786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.601828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.601842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.606457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.606498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.606513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.611044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.611084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.611098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.615623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.615664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.615679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.620197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.620237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.620251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.624853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.624895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.624909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.629381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.629439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.629454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.634015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.634057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.634071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.638629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.638669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.638683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.643277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.643317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.643331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.647862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.647903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.647917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.652523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.652563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.652578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.657173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.657243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.657274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.661869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.661909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.661924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.666553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.666594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.217 [2024-04-17 14:40:29.666609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.217 [2024-04-17 14:40:29.671180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.217 [2024-04-17 14:40:29.671219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.671233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.675901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.675941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.675986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.680535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.680576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:21.218 [2024-04-17 14:40:29.680590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.685158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.685197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.685211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.689776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.689814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.689844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.694414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.694454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.694468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.699088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.699126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.699157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.703677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.703717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.703747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.708279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.708318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.708348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.712829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.712868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.712899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.717525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.717566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.717580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.722174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.722229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.722243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.726824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.726864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.726894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.731538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.731578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.731609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.736088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.736126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.736156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.740751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.740791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.740822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.745356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.745397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.745411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.749954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.750005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.750019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.754546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.754585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.754630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.759190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.759230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.759244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.763825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.763867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.763881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.768285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.768325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.768339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.772766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.772805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.772819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.777395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 
00:21:21.218 [2024-04-17 14:40:29.777447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.777462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.781941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.782010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.782023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.786600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.786640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.786655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.791245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.791285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.218 [2024-04-17 14:40:29.791300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.218 [2024-04-17 14:40:29.795911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.218 [2024-04-17 14:40:29.795982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.219 [2024-04-17 14:40:29.795998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.219 [2024-04-17 14:40:29.800457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.219 [2024-04-17 14:40:29.800498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.219 [2024-04-17 14:40:29.800512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.219 [2024-04-17 14:40:29.805015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.219 [2024-04-17 14:40:29.805054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.219 [2024-04-17 14:40:29.805068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.219 [2024-04-17 14:40:29.809555] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.219 [2024-04-17 14:40:29.809596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.219 [2024-04-17 14:40:29.809611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.219 [2024-04-17 14:40:29.814200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.219 [2024-04-17 14:40:29.814240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.219 [2024-04-17 14:40:29.814254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.818850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.818891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.818906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.823472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.823512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.823528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.828037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.828077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.828091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.832547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.832595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.832609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.837142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.837181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.837195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.841710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.841748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.841779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.846348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.846387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.846401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.851033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.851072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.851086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.855623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.855664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.855678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.860360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.860400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.860430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.865008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.865044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.865074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.869535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.869576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.869590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.874070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.874109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.874139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.878674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.878715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.878761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.883290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.883330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.883344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.887937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.888008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.888023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.892661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.892702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.892716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.897307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.897363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.897377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.902031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.902095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.902113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.906945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.907016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.907032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.911564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.911604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.911635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.916240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.916281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.916296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.920869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.920911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.920926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.925441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.925482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.925496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.930100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.930140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.930154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.934607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.934652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:21.479 [2024-04-17 14:40:29.934683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.939389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.939434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.939449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.943926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.944000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.944015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.948609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.479 [2024-04-17 14:40:29.948670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.479 [2024-04-17 14:40:29.948686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.479 [2024-04-17 14:40:29.953288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.953368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.953384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:29.958020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.958081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.958096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:29.962591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.962647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.962664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:29.967240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.967295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.967310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:29.971892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.971981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.971999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:29.976604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.976659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.976674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:29.981424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.981479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.981494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:29.986199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.986253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.986269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:29.990832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.990888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.990903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:29.995484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:29.995543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:29.995558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.000136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.000187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.000217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.004761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.004820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.004835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.009832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.009899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.009915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.014563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.014623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.014639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.019343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.019404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.019419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.024044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.024102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.024117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.028776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.028842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.028858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.033513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 
00:21:21.480 [2024-04-17 14:40:30.033569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.033584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.038181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.038234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.038250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.042701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.042756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.042773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.047345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.047404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.047420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.052045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.052100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.052116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.056674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.056731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.056746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.061326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.061380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.061397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.065989] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.066041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.066057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.070583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.070625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.070640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.075120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.075159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.075173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.480 [2024-04-17 14:40:30.079716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.480 [2024-04-17 14:40:30.079756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.480 [2024-04-17 14:40:30.079770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.739 [2024-04-17 14:40:30.084370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.739 [2024-04-17 14:40:30.084410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.739 [2024-04-17 14:40:30.084424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.739 [2024-04-17 14:40:30.088978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.739 [2024-04-17 14:40:30.089017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.089031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.093564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.093606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.093621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.098217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.098257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.098270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.102808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.102850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.102864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.107408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.107449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.107463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.112020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.112058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.112072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.116487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.116528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.116542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.121034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.121073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.121086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.125745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.125785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.125800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.130258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.130298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.130312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.134792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.134833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.134847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.139368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.139409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.139423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.143998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.144037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.144051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.148551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.148591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.148606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.153098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.153137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.153151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.157663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.157703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.157717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.162305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.162346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.162359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.166926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.166976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.166991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.171498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.171539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.171552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.176048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.176086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.176100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.180635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.180676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.180691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.185221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.185260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.185273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.189834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.189874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:21.740 [2024-04-17 14:40:30.189888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.194546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.194617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.194631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.199237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.199277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.199291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.203908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.203960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.203974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.208505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.208562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.740 [2024-04-17 14:40:30.208575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.740 [2024-04-17 14:40:30.212997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e8530) 00:21:21.740 [2024-04-17 14:40:30.213051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.741 [2024-04-17 14:40:30.213064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.741 00:21:21.741 Latency(us) 00:21:21.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.741 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:21.741 nvme0n1 : 2.00 6475.59 809.45 0.00 0.00 2467.18 2115.03 7000.44 00:21:21.741 =================================================================================================================== 00:21:21.741 Total : 6475.59 809.45 0.00 0.00 2467.18 2115.03 7000.44 00:21:21.741 0 00:21:21.741 14:40:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:21.741 14:40:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:21.741 | .driver_specific 00:21:21.741 | .nvme_error 00:21:21.741 | .status_code 00:21:21.741 | .command_transient_transport_error' 
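(For readability: the get_transient_errcount helper traced just above is what decides whether this randread pass counted the injected digest errors. Condensed into standalone commands, using the same bperf socket and jq filter shown in the trace, the check amounts to roughly the following sketch; it is not a verbatim excerpt of digest.sh.)

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
             | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The helper expects a non-zero count of transient transport errors;
  # in this run the count comes back as 418 (see the (( 418 > 0 )) check below).
  (( errcount > 0 ))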
00:21:21.741 14:40:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:21.741 14:40:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:22.000 14:40:30 -- host/digest.sh@71 -- # (( 418 > 0 )) 00:21:22.000 14:40:30 -- host/digest.sh@73 -- # killprocess 76220 00:21:22.000 14:40:30 -- common/autotest_common.sh@936 -- # '[' -z 76220 ']' 00:21:22.000 14:40:30 -- common/autotest_common.sh@940 -- # kill -0 76220 00:21:22.000 14:40:30 -- common/autotest_common.sh@941 -- # uname 00:21:22.000 14:40:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.000 14:40:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76220 00:21:22.000 14:40:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:22.000 killing process with pid 76220 00:21:22.000 14:40:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:22.000 14:40:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76220' 00:21:22.000 Received shutdown signal, test time was about 2.000000 seconds 00:21:22.000 00:21:22.000 Latency(us) 00:21:22.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.000 =================================================================================================================== 00:21:22.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.000 14:40:30 -- common/autotest_common.sh@955 -- # kill 76220 00:21:22.000 14:40:30 -- common/autotest_common.sh@960 -- # wait 76220 00:21:22.259 14:40:30 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:22.259 14:40:30 -- host/digest.sh@54 -- # local rw bs qd 00:21:22.259 14:40:30 -- host/digest.sh@56 -- # rw=randwrite 00:21:22.259 14:40:30 -- host/digest.sh@56 -- # bs=4096 00:21:22.259 14:40:30 -- host/digest.sh@56 -- # qd=128 00:21:22.259 14:40:30 -- host/digest.sh@58 -- # bperfpid=76267 00:21:22.259 14:40:30 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:22.259 14:40:30 -- host/digest.sh@60 -- # waitforlisten 76267 /var/tmp/bperf.sock 00:21:22.259 14:40:30 -- common/autotest_common.sh@817 -- # '[' -z 76267 ']' 00:21:22.259 14:40:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:22.259 14:40:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:22.259 14:40:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:22.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:22.259 14:40:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:22.259 14:40:30 -- common/autotest_common.sh@10 -- # set +x 00:21:22.259 [2024-04-17 14:40:30.820189] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:21:22.259 [2024-04-17 14:40:30.820275] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76267 ] 00:21:22.519 [2024-04-17 14:40:30.953350] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.519 [2024-04-17 14:40:31.013639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.519 14:40:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:22.519 14:40:31 -- common/autotest_common.sh@850 -- # return 0 00:21:22.519 14:40:31 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:22.519 14:40:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:22.820 14:40:31 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:22.820 14:40:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.820 14:40:31 -- common/autotest_common.sh@10 -- # set +x 00:21:22.820 14:40:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.820 14:40:31 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:22.820 14:40:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:23.387 nvme0n1 00:21:23.387 14:40:31 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:23.387 14:40:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.387 14:40:31 -- common/autotest_common.sh@10 -- # set +x 00:21:23.387 14:40:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.387 14:40:31 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:23.387 14:40:31 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:23.387 Running I/O for 2 seconds... 
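(The randwrite pass that begins here is prepared the same way as the randread pass above. Condensed from the digest.sh trace, with the helper wrappers and arguments exactly as logged, the setup is roughly the sequence below; this is a sketch of the traced calls, not the script itself.)

  # bperf_rpc wraps rpc.py against the bdevperf app on /var/tmp/bperf.sock;
  # rpc_cmd is a separate wrapper whose socket is not expanded in this excerpt.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd   accel_error_inject_error -o crc32c -t disable          # clear any earlier injection
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -b nvme0                 # attach with TCP data digest enabled
  rpc_cmd   accel_error_inject_error -o crc32c -t corrupt -i 256   # inject crc32c corruption (-i 256 as given in the trace)
  bperf_py  perform_tests                                          # run the 2-second randwrite workload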
00:21:23.387 [2024-04-17 14:40:31.903119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190fef90 00:21:23.388 [2024-04-17 14:40:31.905877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.388 [2024-04-17 14:40:31.905927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.388 [2024-04-17 14:40:31.920321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190feb58 00:21:23.388 [2024-04-17 14:40:31.923119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.388 [2024-04-17 14:40:31.923161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:23.388 [2024-04-17 14:40:31.937613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190fe2e8 00:21:23.388 [2024-04-17 14:40:31.940407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.388 [2024-04-17 14:40:31.940460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:23.388 [2024-04-17 14:40:31.954623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190fda78 00:21:23.388 [2024-04-17 14:40:31.957251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.388 [2024-04-17 14:40:31.957288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:23.388 [2024-04-17 14:40:31.971786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190fd208 00:21:23.388 [2024-04-17 14:40:31.974428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.388 [2024-04-17 14:40:31.974465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:23.388 [2024-04-17 14:40:31.989028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190fc998 00:21:23.647 [2024-04-17 14:40:31.991697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:31.991747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.006208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190fc128 00:21:23.647 [2024-04-17 14:40:32.008789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.008833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.023067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190fb8b8 00:21:23.647 [2024-04-17 14:40:32.025613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.025654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.040149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190fb048 00:21:23.647 [2024-04-17 14:40:32.042744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.042782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.056952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190fa7d8 00:21:23.647 [2024-04-17 14:40:32.059506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.059573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.073883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f9f68 00:21:23.647 [2024-04-17 14:40:32.076402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.076439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.090718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f96f8 00:21:23.647 [2024-04-17 14:40:32.093187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.093224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.107481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f8e88 00:21:23.647 [2024-04-17 14:40:32.109906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.109960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.124302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f8618 00:21:23.647 [2024-04-17 14:40:32.126745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.126785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.141176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f7da8 00:21:23.647 [2024-04-17 14:40:32.143640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.143679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.158233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f7538 00:21:23.647 [2024-04-17 14:40:32.160621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.160688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.175113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f6cc8 00:21:23.647 [2024-04-17 14:40:32.177501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.177540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.191535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f6458 00:21:23.647 [2024-04-17 14:40:32.193852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.193888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.208134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f5be8 00:21:23.647 [2024-04-17 14:40:32.210455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.210491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.225095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f5378 00:21:23.647 [2024-04-17 14:40:32.227395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.227443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:23.647 [2024-04-17 14:40:32.242400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f4b08 00:21:23.647 [2024-04-17 14:40:32.244730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.647 [2024-04-17 14:40:32.244769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.259596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f4298 00:21:23.907 [2024-04-17 14:40:32.261873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.261911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.276328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f3a28 00:21:23.907 [2024-04-17 14:40:32.278580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.278621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.293133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f31b8 00:21:23.907 [2024-04-17 14:40:32.295350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.295389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.309901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f2948 00:21:23.907 [2024-04-17 14:40:32.312120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.312163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.326967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f20d8 00:21:23.907 [2024-04-17 14:40:32.329169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.329222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.344257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f1868 00:21:23.907 [2024-04-17 14:40:32.346455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.346507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.361608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f0ff8 00:21:23.907 [2024-04-17 14:40:32.363793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.363844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.378965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f0788 00:21:23.907 [2024-04-17 14:40:32.381117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.381165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.396231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190eff18 00:21:23.907 [2024-04-17 14:40:32.398370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.398419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.413591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ef6a8 00:21:23.907 [2024-04-17 14:40:32.415728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.415777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.430887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190eee38 00:21:23.907 [2024-04-17 14:40:32.432950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.433004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.448253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ee5c8 00:21:23.907 [2024-04-17 14:40:32.450377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.450431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.465592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190edd58 00:21:23.907 [2024-04-17 14:40:32.467623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.467671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.482629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ed4e8 00:21:23.907 [2024-04-17 14:40:32.484642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.484691] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:23.907 [2024-04-17 14:40:32.500116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ecc78 00:21:23.907 [2024-04-17 14:40:32.502128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.907 [2024-04-17 14:40:32.502173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:24.166 [2024-04-17 14:40:32.517486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ec408 00:21:24.166 [2024-04-17 14:40:32.519464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.166 [2024-04-17 14:40:32.519512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:24.166 [2024-04-17 14:40:32.534753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ebb98 00:21:24.166 [2024-04-17 14:40:32.536734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.536778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.552283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190eb328 00:21:24.167 [2024-04-17 14:40:32.554276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.554323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.569744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190eaab8 00:21:24.167 [2024-04-17 14:40:32.571692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.571743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.587394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ea248 00:21:24.167 [2024-04-17 14:40:32.589283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.589347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.604527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e99d8 00:21:24.167 [2024-04-17 14:40:32.606479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.606526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.622478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e9168 00:21:24.167 [2024-04-17 14:40:32.624363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.624423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.640446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e88f8 00:21:24.167 [2024-04-17 14:40:32.642362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.642408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.657614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e8088 00:21:24.167 [2024-04-17 14:40:32.659499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.659550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.675014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e7818 00:21:24.167 [2024-04-17 14:40:32.676801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.676850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.692363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e6fa8 00:21:24.167 [2024-04-17 14:40:32.694157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.694205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.709726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e6738 00:21:24.167 [2024-04-17 14:40:32.711470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.711516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.726996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e5ec8 00:21:24.167 [2024-04-17 14:40:32.728745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 
14:40:32.728797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.744345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e5658 00:21:24.167 [2024-04-17 14:40:32.746070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.746120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:24.167 [2024-04-17 14:40:32.761798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e4de8 00:21:24.167 [2024-04-17 14:40:32.763490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.167 [2024-04-17 14:40:32.763542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.779131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e4578 00:21:24.426 [2024-04-17 14:40:32.780792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.780840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.796708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e3d08 00:21:24.426 [2024-04-17 14:40:32.798437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.798482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.814364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e3498 00:21:24.426 [2024-04-17 14:40:32.815984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.816028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.831349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e2c28 00:21:24.426 [2024-04-17 14:40:32.832890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.832928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.848299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e23b8 00:21:24.426 [2024-04-17 14:40:32.849827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11290 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:24.426 [2024-04-17 14:40:32.849864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.865484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e1b48 00:21:24.426 [2024-04-17 14:40:32.867035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.867087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.882610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e12d8 00:21:24.426 [2024-04-17 14:40:32.884157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.884194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.899379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e0a68 00:21:24.426 [2024-04-17 14:40:32.900850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.900886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.916244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e01f8 00:21:24.426 [2024-04-17 14:40:32.917672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.917710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.933154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190df988 00:21:24.426 [2024-04-17 14:40:32.934569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.934606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.949911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190df118 00:21:24.426 [2024-04-17 14:40:32.951297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.951338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.966808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190de8a8 00:21:24.426 [2024-04-17 14:40:32.968194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11206 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.968234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:32.983642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190de038 00:21:24.426 [2024-04-17 14:40:32.985027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:32.985062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:33.007939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190de038 00:21:24.426 [2024-04-17 14:40:33.010715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:33.010759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.426 [2024-04-17 14:40:33.024788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190de8a8 00:21:24.426 [2024-04-17 14:40:33.027461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.426 [2024-04-17 14:40:33.027500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:24.685 [2024-04-17 14:40:33.041449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190df118 00:21:24.685 [2024-04-17 14:40:33.044057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.685 [2024-04-17 14:40:33.044095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:24.685 [2024-04-17 14:40:33.058092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190df988 00:21:24.685 [2024-04-17 14:40:33.060680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.685 [2024-04-17 14:40:33.060717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:24.685 [2024-04-17 14:40:33.074816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e01f8 00:21:24.685 [2024-04-17 14:40:33.077412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.685 [2024-04-17 14:40:33.077455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:24.685 [2024-04-17 14:40:33.091502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e0a68 00:21:24.685 [2024-04-17 14:40:33.094063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:22285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.685 [2024-04-17 14:40:33.094102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:24.685 [2024-04-17 14:40:33.108181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e12d8 00:21:24.685 [2024-04-17 14:40:33.110718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.685 [2024-04-17 14:40:33.110771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:24.685 [2024-04-17 14:40:33.125025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e1b48 00:21:24.685 [2024-04-17 14:40:33.127563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.685 [2024-04-17 14:40:33.127604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:24.685 [2024-04-17 14:40:33.141912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e23b8 00:21:24.685 [2024-04-17 14:40:33.144412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.685 [2024-04-17 14:40:33.144455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:24.685 [2024-04-17 14:40:33.158671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e2c28 00:21:24.685 [2024-04-17 14:40:33.161172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.685 [2024-04-17 14:40:33.161212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:24.685 [2024-04-17 14:40:33.175556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e3498 00:21:24.686 [2024-04-17 14:40:33.178055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.686 [2024-04-17 14:40:33.178096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:24.686 [2024-04-17 14:40:33.192401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e3d08 00:21:24.686 [2024-04-17 14:40:33.194866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.686 [2024-04-17 14:40:33.194907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:24.686 [2024-04-17 14:40:33.209256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e4578 00:21:24.686 [2024-04-17 14:40:33.211707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.686 [2024-04-17 14:40:33.211755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:24.686 [2024-04-17 14:40:33.226148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e4de8 00:21:24.686 [2024-04-17 14:40:33.228543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.686 [2024-04-17 14:40:33.228591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:24.686 [2024-04-17 14:40:33.243085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e5658 00:21:24.686 [2024-04-17 14:40:33.245466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.686 [2024-04-17 14:40:33.245510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:24.686 [2024-04-17 14:40:33.260014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e5ec8 00:21:24.686 [2024-04-17 14:40:33.262412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.686 [2024-04-17 14:40:33.262456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:24.686 [2024-04-17 14:40:33.276834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e6738 00:21:24.686 [2024-04-17 14:40:33.279228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.686 [2024-04-17 14:40:33.279270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:24.944 [2024-04-17 14:40:33.293688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e6fa8 00:21:24.944 [2024-04-17 14:40:33.296007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.944 [2024-04-17 14:40:33.296047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:24.944 [2024-04-17 14:40:33.310373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e7818 00:21:24.945 [2024-04-17 14:40:33.312660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.312701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.327080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e8088 00:21:24.945 [2024-04-17 
14:40:33.329339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.329378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.343701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e88f8 00:21:24.945 [2024-04-17 14:40:33.346014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.346051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.360695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e9168 00:21:24.945 [2024-04-17 14:40:33.362948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.362997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.377610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190e99d8 00:21:24.945 [2024-04-17 14:40:33.379836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.379876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.394288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ea248 00:21:24.945 [2024-04-17 14:40:33.396449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.396487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.410777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190eaab8 00:21:24.945 [2024-04-17 14:40:33.413003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.413041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.427422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190eb328 00:21:24.945 [2024-04-17 14:40:33.429554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.429591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.444168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ebb98 
00:21:24.945 [2024-04-17 14:40:33.446288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.446325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.460731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ec408 00:21:24.945 [2024-04-17 14:40:33.462857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.462892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.477285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ecc78 00:21:24.945 [2024-04-17 14:40:33.479382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.479420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.493868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ed4e8 00:21:24.945 [2024-04-17 14:40:33.495969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.496014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.510603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190edd58 00:21:24.945 [2024-04-17 14:40:33.512657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.512694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.527272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190ee5c8 00:21:24.945 [2024-04-17 14:40:33.529291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.529337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:24.945 [2024-04-17 14:40:33.543968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190eee38 00:21:24.945 [2024-04-17 14:40:33.545966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.945 [2024-04-17 14:40:33.546004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:25.203 [2024-04-17 14:40:33.560640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with 
pdu=0x2000190ef6a8 00:21:25.203 [2024-04-17 14:40:33.562637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.203 [2024-04-17 14:40:33.562675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:25.203 [2024-04-17 14:40:33.577314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190eff18 00:21:25.203 [2024-04-17 14:40:33.579259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.203 [2024-04-17 14:40:33.579298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:25.203 [2024-04-17 14:40:33.593907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f0788 00:21:25.203 [2024-04-17 14:40:33.595830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.203 [2024-04-17 14:40:33.595869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:25.203 [2024-04-17 14:40:33.610561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f0ff8 00:21:25.203 [2024-04-17 14:40:33.612463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.203 [2024-04-17 14:40:33.612500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:25.203 [2024-04-17 14:40:33.627181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f1868 00:21:25.203 [2024-04-17 14:40:33.629051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.203 [2024-04-17 14:40:33.629089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:25.203 [2024-04-17 14:40:33.643795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f20d8 00:21:25.203 [2024-04-17 14:40:33.645684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.645721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:25.204 [2024-04-17 14:40:33.660566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f2948 00:21:25.204 [2024-04-17 14:40:33.662413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.662449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:25.204 [2024-04-17 14:40:33.677583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13b7030) with pdu=0x2000190f31b8 00:21:25.204 [2024-04-17 14:40:33.679436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.679473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:25.204 [2024-04-17 14:40:33.694579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f3a28 00:21:25.204 [2024-04-17 14:40:33.696408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.696442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:25.204 [2024-04-17 14:40:33.711747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f4298 00:21:25.204 [2024-04-17 14:40:33.713625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.713663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:25.204 [2024-04-17 14:40:33.728810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f4b08 00:21:25.204 [2024-04-17 14:40:33.730681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.730718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:25.204 [2024-04-17 14:40:33.746179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f5378 00:21:25.204 [2024-04-17 14:40:33.747885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.747920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:25.204 [2024-04-17 14:40:33.762791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f5be8 00:21:25.204 [2024-04-17 14:40:33.764561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.764595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:25.204 [2024-04-17 14:40:33.779587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f6458 00:21:25.204 [2024-04-17 14:40:33.781273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.781309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:25.204 [2024-04-17 14:40:33.796888] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f6cc8 00:21:25.204 [2024-04-17 14:40:33.798615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.204 [2024-04-17 14:40:33.798653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:25.463 [2024-04-17 14:40:33.814168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f7538 00:21:25.463 [2024-04-17 14:40:33.815898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.463 [2024-04-17 14:40:33.815939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:25.463 [2024-04-17 14:40:33.831162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f7da8 00:21:25.463 [2024-04-17 14:40:33.832793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.463 [2024-04-17 14:40:33.832831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:25.463 [2024-04-17 14:40:33.847942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f8618 00:21:25.463 [2024-04-17 14:40:33.849578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.463 [2024-04-17 14:40:33.849617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:25.463 [2024-04-17 14:40:33.864798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f8e88 00:21:25.463 [2024-04-17 14:40:33.866411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.463 [2024-04-17 14:40:33.866451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:25.463 [2024-04-17 14:40:33.881926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b7030) with pdu=0x2000190f96f8 00:21:25.463 [2024-04-17 14:40:33.883516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.463 [2024-04-17 14:40:33.883555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:25.463 00:21:25.463 Latency(us) 00:21:25.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.463 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:25.463 nvme0n1 : 2.00 14902.03 58.21 0.00 0.00 8581.84 2323.55 32648.84 00:21:25.463 =================================================================================================================== 00:21:25.463 Total : 14902.03 58.21 0.00 0.00 8581.84 2323.55 32648.84 00:21:25.463 0 00:21:25.463 14:40:33 -- host/digest.sh@71 -- # get_transient_errcount 
nvme0n1 00:21:25.463 14:40:33 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:25.463 14:40:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:25.463 14:40:33 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:25.463 | .driver_specific 00:21:25.463 | .nvme_error 00:21:25.463 | .status_code 00:21:25.463 | .command_transient_transport_error' 00:21:25.723 14:40:34 -- host/digest.sh@71 -- # (( 117 > 0 )) 00:21:25.723 14:40:34 -- host/digest.sh@73 -- # killprocess 76267 00:21:25.723 14:40:34 -- common/autotest_common.sh@936 -- # '[' -z 76267 ']' 00:21:25.723 14:40:34 -- common/autotest_common.sh@940 -- # kill -0 76267 00:21:25.723 14:40:34 -- common/autotest_common.sh@941 -- # uname 00:21:25.723 14:40:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:25.723 14:40:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76267 00:21:25.723 killing process with pid 76267 00:21:25.723 Received shutdown signal, test time was about 2.000000 seconds 00:21:25.723 00:21:25.723 Latency(us) 00:21:25.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.723 =================================================================================================================== 00:21:25.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.723 14:40:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:25.723 14:40:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:25.723 14:40:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76267' 00:21:25.723 14:40:34 -- common/autotest_common.sh@955 -- # kill 76267 00:21:25.723 14:40:34 -- common/autotest_common.sh@960 -- # wait 76267 00:21:25.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:25.981 14:40:34 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:25.981 14:40:34 -- host/digest.sh@54 -- # local rw bs qd 00:21:25.981 14:40:34 -- host/digest.sh@56 -- # rw=randwrite 00:21:25.981 14:40:34 -- host/digest.sh@56 -- # bs=131072 00:21:25.981 14:40:34 -- host/digest.sh@56 -- # qd=16 00:21:25.981 14:40:34 -- host/digest.sh@58 -- # bperfpid=76320 00:21:25.981 14:40:34 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:25.981 14:40:34 -- host/digest.sh@60 -- # waitforlisten 76320 /var/tmp/bperf.sock 00:21:25.981 14:40:34 -- common/autotest_common.sh@817 -- # '[' -z 76320 ']' 00:21:25.981 14:40:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:25.981 14:40:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:25.981 14:40:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:25.981 14:40:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:25.981 14:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:25.982 [2024-04-17 14:40:34.444727] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
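The pass/fail check traced above boils down to reading the per-bdev NVMe error counters back out of bdevperf and requiring at least one COMMAND TRANSIENT TRANSPORT ERROR completion. A minimal sketch of that check, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and the attached bdev is named nvme0n1, both as in the trace:

  # Pull per-bdev I/O statistics over the bdevperf RPC socket and extract the
  # count of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR
  # (populated because bdev_nvme_set_options is called with --nvme-error-stat).
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

  # Every injected data digest error should surface as a transient transport
  # error; the run above counted 117 of them before tearing bdevperf down.
  (( errcount > 0 ))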
00:21:25.982 [2024-04-17 14:40:34.445017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76320 ] 00:21:25.982 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:25.982 Zero copy mechanism will not be used. 00:21:25.982 [2024-04-17 14:40:34.579727] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.240 [2024-04-17 14:40:34.638988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.240 14:40:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:26.240 14:40:34 -- common/autotest_common.sh@850 -- # return 0 00:21:26.240 14:40:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:26.240 14:40:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:26.499 14:40:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:26.499 14:40:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.499 14:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:26.499 14:40:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.499 14:40:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:26.499 14:40:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:26.757 nvme0n1 00:21:26.757 14:40:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:26.757 14:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.757 14:40:35 -- common/autotest_common.sh@10 -- # set +x 00:21:26.757 14:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.757 14:40:35 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:26.757 14:40:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:27.017 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:27.017 Zero copy mechanism will not be used. 00:21:27.017 Running I/O for 2 seconds... 
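The setup just traced for the 131072-byte, queue-depth-16 error run reduces to a handful of RPC calls. A condensed sketch using the same paths, flags and target address as the trace; the accel_error_inject_error calls go through rpc_cmd, which presumably targets the nvmf target's default RPC socket, while the bdev_nvme calls go to the bdevperf socket:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
  # so injected digest errors are counted rather than failing the workload.
  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any stale crc32c error injection, then attach the subsystem over TCP
  # with data digest enabled (--ddgst).
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 32 crc32c calculations so WRITEs hit data digest errors,
  # then start the queued bdevperf job (-w randwrite -o 131072 -q 16).
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests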
00:21:27.017 [2024-04-17 14:40:35.467510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.467836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.467869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.472884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.473216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.473252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.478218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.478526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.478557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.483659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.483970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.484011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.488926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.489245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.489279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.494346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.494656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.494687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.499691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.500004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.500045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.505072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.505388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.505412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.510399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.510706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.510736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.515746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.516080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.516117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.521135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.521470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.521510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.526490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.526809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.526842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.531804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.532126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.532165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.537089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.537413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.537443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.542554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.542873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.542902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.547945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.548263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.548294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.553293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.553627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.553656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.558644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.558964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.558994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.563995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.564333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.564369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.569370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.569682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.569721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.574683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.575004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.575032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.579923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.017 [2024-04-17 14:40:35.580245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.017 [2024-04-17 14:40:35.580279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.017 [2024-04-17 14:40:35.585274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.018 [2024-04-17 14:40:35.585590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.018 [2024-04-17 14:40:35.585620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.018 [2024-04-17 14:40:35.590695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.018 [2024-04-17 14:40:35.591020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.018 [2024-04-17 14:40:35.591050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.018 [2024-04-17 14:40:35.596039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.018 [2024-04-17 14:40:35.596347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.018 [2024-04-17 14:40:35.596377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.018 [2024-04-17 14:40:35.601341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.018 [2024-04-17 14:40:35.601648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.018 [2024-04-17 14:40:35.601676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.018 [2024-04-17 14:40:35.606657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.018 [2024-04-17 14:40:35.606979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.018 [2024-04-17 14:40:35.607008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.018 [2024-04-17 14:40:35.611979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.018 [2024-04-17 14:40:35.612282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.018 
[2024-04-17 14:40:35.612311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.018 [2024-04-17 14:40:35.617266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.018 [2024-04-17 14:40:35.617582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.018 [2024-04-17 14:40:35.617611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.278 [2024-04-17 14:40:35.622638] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.278 [2024-04-17 14:40:35.622945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.278 [2024-04-17 14:40:35.622988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.278 [2024-04-17 14:40:35.628025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.278 [2024-04-17 14:40:35.628332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.278 [2024-04-17 14:40:35.628360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.278 [2024-04-17 14:40:35.633343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.278 [2024-04-17 14:40:35.633650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.278 [2024-04-17 14:40:35.633680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.278 [2024-04-17 14:40:35.638633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.278 [2024-04-17 14:40:35.638938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.278 [2024-04-17 14:40:35.638980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.278 [2024-04-17 14:40:35.643879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.278 [2024-04-17 14:40:35.644196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.278 [2024-04-17 14:40:35.644226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.278 [2024-04-17 14:40:35.649143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.278 [2024-04-17 14:40:35.649467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.278 [2024-04-17 14:40:35.649496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.278 [2024-04-17 14:40:35.654460] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.654779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.654809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.659825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.660155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.660185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.665193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.665508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.665538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.670603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.670907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.670936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.676063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.676383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.676412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.681473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.681784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.681813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.686862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.687181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.687211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.692444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.692896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.693234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.698303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.698764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.698963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.704128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.704562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.704597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.709774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.710117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.710151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.715224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.715546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.715591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.720825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.721145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.721174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.726239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.726543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.726573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.731694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.732034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.732078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.737417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.737726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.737756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.742882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.743233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.743260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.748385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.748689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.748718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.753936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.754299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.754338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.759337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.279 [2024-04-17 14:40:35.759640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.279 [2024-04-17 14:40:35.759669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.279 [2024-04-17 14:40:35.764691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 
[2024-04-17 14:40:35.765023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.765069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.770078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.770424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.770453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.775682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.775986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.776028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.781162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.781495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.781524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.786736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.787056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.787090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.792138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.792456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.792485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.797398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.797705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.797734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.802686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.803024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.803052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.807969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.808300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.808329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.813351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.813661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.813690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.818747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.819080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.819113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.824087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.824392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.824421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.829424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.829729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.829758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.834707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.835025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.835055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.840068] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.840372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.840400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.845423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.845729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.845758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.850803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.851139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.851187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.856237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.856541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.856569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.861601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.861905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.861934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.280 [2024-04-17 14:40:35.866913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.280 [2024-04-17 14:40:35.867250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.280 [2024-04-17 14:40:35.867284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.281 [2024-04-17 14:40:35.872300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.281 [2024-04-17 14:40:35.872604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.281 [2024-04-17 14:40:35.872633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:27.281 [2024-04-17 14:40:35.877660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.281 [2024-04-17 14:40:35.877983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.281 [2024-04-17 14:40:35.878016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.882929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.883253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.883286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.888291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.888598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.888628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.893559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.893864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.893905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.898842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.899160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.899193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.904190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.904498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.904527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.909630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.909984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.910013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.915094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.915405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.915429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.920484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.920791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.920821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.925803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.926123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.926155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.931131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.931438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.931468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.936503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.936828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.936857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.941870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.942201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.942234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.947179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.947484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.947514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.952519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.952825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.952859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.541 [2024-04-17 14:40:35.957792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.541 [2024-04-17 14:40:35.958115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.541 [2024-04-17 14:40:35.958147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:35.963093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:35.963399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:35.963429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:35.968390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:35.968698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:35.968728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:35.973763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:35.974091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:35.974122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:35.979108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:35.979415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:35.979444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:35.984450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:35.984768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:35.984798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:35.989832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:35.990158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:35.990194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:35.995203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:35.995514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:35.995543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.000506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.000823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.000854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.005849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.006174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.006208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.011257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.011579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.011611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.016774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.017098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.017140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.022372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.022682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 
[2024-04-17 14:40:36.022715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.027822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.028152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.028187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.033233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.033567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.033598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.038801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.039125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.039164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.044289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.044597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.044629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.049773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.050097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.050131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.055222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.055544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.055575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.060610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.060924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.060968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.066226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.066538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.066568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.071773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.072108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.072138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.077202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.077543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.077573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.082704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.083030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.083060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.088143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.088460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.088490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.093670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.093988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.094017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.099099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.099419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.542 [2024-04-17 14:40:36.099448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.542 [2024-04-17 14:40:36.104538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.542 [2024-04-17 14:40:36.104874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.543 [2024-04-17 14:40:36.104905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.543 [2024-04-17 14:40:36.110114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.543 [2024-04-17 14:40:36.110424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.543 [2024-04-17 14:40:36.110454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.543 [2024-04-17 14:40:36.115429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.543 [2024-04-17 14:40:36.115738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.543 [2024-04-17 14:40:36.115768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.543 [2024-04-17 14:40:36.120766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.543 [2024-04-17 14:40:36.121092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.543 [2024-04-17 14:40:36.121122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.543 [2024-04-17 14:40:36.126194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.543 [2024-04-17 14:40:36.126500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.543 [2024-04-17 14:40:36.126531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.543 [2024-04-17 14:40:36.131563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.543 [2024-04-17 14:40:36.131875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.543 [2024-04-17 14:40:36.131906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.543 [2024-04-17 14:40:36.137027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.543 [2024-04-17 14:40:36.137384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.543 [2024-04-17 14:40:36.137414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.543 [2024-04-17 14:40:36.142525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.543 [2024-04-17 14:40:36.142851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.543 [2024-04-17 14:40:36.142882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.147895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.148236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.148265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.153397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.153708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.153738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.158891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.159232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.159272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.164186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.164501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.164529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.169566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.169873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.169903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.174867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 
[2024-04-17 14:40:36.175199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.175233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.180280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.180586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.180617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.186405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.186736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.186767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.192010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.192327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.192358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.197647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.197981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.198011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.203058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.203368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.203398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.208465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.208785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.208816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.213878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.214196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.214235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.219215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.219528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.219558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.224533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.224842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.224872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.230407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.230721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.230753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.235818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.236156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.236185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.241563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.241888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.241919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.247036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.247358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.247388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.252636] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.252959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.252989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.258150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.258480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.258510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.263527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.263846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.263876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.268935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.269255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.269284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.274364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.803 [2024-04-17 14:40:36.274689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.803 [2024-04-17 14:40:36.274719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.803 [2024-04-17 14:40:36.279777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.280101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.280131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.285232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.285588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.285619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:27.804 [2024-04-17 14:40:36.290829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.291153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.291196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.296202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.296513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.296553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.301767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.302097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.302127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.307130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.307446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.307486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.312613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.312941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.312983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.318164] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.318482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.318515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.323602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.323910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.323959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.329005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.329358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.329394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.334503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.334849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.334886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.340117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.340433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.340469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.345628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.345953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.346014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.351130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.351463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.351500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.356619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.356964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.357027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.362149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.362474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.362511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.367565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.367919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.367969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.374086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.374426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.374461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.379766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.380123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.380162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.385281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.385649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.385685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.390977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.391340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.391378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.396482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.396816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.396852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.804 [2024-04-17 14:40:36.402159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:27.804 [2024-04-17 14:40:36.402506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.804 [2024-04-17 14:40:36.402542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.064 [2024-04-17 14:40:36.407809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.064 [2024-04-17 14:40:36.408166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.064 [2024-04-17 14:40:36.408201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.064 [2024-04-17 14:40:36.413379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.413711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.413747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.418744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.419098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.419140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.424169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.424498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.424535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.429560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.429896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.429933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.434977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.435313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.435353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.440439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.440777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 
[2024-04-17 14:40:36.440814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.445599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.445686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.445716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.451098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.451191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.451222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.456549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.456637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.456666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.462032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.462119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.462148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.467413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.467501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.467547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.472784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.472873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.472902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.478238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.478326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.478355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.483601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.483692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.483724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.488942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.489060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.489090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.494332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.494419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.494449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.499735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.499824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.499855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.505168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.505288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.505318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.511025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.511150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.511181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.516565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.516676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.516706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.522101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.522191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.522221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.527543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.527671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.527717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.533153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.533295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.533327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.538600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.065 [2024-04-17 14:40:36.538697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.065 [2024-04-17 14:40:36.538728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.065 [2024-04-17 14:40:36.543885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.544020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.544052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.549770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.549901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.549932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.555477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.555600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.555632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.560972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.561090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.561122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.566448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.566537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.566573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.571784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.571873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.571904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.577175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.577309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.577355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.582726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.582823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.582853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.588238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.588347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.588377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.593655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.593764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.593795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.599503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.599601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.599635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.605033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.605128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.605160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.610501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.610594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.610628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.615765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.615887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.615917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.621306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.621435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.621467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.626878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.627054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.627084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.632408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 
14:40:36.632507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.632537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.637904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.638057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.638089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.643451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.643550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.643580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.648772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.648876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.648906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.654258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.654356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.654387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.659647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.659765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.659795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.066 [2024-04-17 14:40:36.665383] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.066 [2024-04-17 14:40:36.665483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.066 [2024-04-17 14:40:36.665514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.670902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 
00:21:28.327 [2024-04-17 14:40:36.671053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.671084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.676507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.676603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.676633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.681958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.682093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.682123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.687442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.687543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.687589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.692811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.692904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.692936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.698105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.698206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.698236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.703547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.703646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.703677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.708971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.709078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.709109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.714421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.714527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.714558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.719808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.719910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.719941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.725235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.725376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.725407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.730708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.730806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.730837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.736128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.736227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.736259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.741641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.741754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.741791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.747045] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.747167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.747200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.752481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.752602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.752648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.758038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.758130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.758160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.763492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.763613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.763643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.769134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.769243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.769274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.774795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.774901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.774931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.780208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.780320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.780358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.785695] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.785801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.785832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.791238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.327 [2024-04-17 14:40:36.791345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.327 [2024-04-17 14:40:36.791376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.327 [2024-04-17 14:40:36.796646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.796761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.796791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.802270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.802364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.802394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.808081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.808177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.808247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.813563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.813656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.813687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.819118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.819210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.819235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.328 
[2024-04-17 14:40:36.824528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.824614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.824638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.829942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.830069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.830092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.835395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.835473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.835498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.840880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.840973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.841029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.846308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.846396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.846421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.851712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.851785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.851810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.857277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.857378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.857402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.862723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.862811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.862834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.867988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.868106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.868130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.873282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.873392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.873417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.878663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.878756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.878778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.884171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.884262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.884287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.889315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.889411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.889435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.894577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.894659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.894682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.899899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.900019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.900065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.905387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.905461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.905486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.910714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.910789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.910814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.916079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.916162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.916186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.921480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.921565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.921590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.328 [2024-04-17 14:40:36.927188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.328 [2024-04-17 14:40:36.927261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.328 [2024-04-17 14:40:36.927285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.932877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.932978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.933003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.938399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.938481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.938504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.943542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.943617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.943641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.948854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.948981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.949011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.954461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.954540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.954564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.960062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.960149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.960174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.965517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.965614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.965640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.971032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.971118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.971143] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.976424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.976513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.976538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.982027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.982113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.982138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.589 [2024-04-17 14:40:36.987569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.589 [2024-04-17 14:40:36.987663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.589 [2024-04-17 14:40:36.987688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:36.993082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:36.993165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:36.993190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:36.998553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:36.998636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:36.998659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.004129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.004216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.004240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.009565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.009661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.009685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.015050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.015136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.015161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.020482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.020564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.020587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.025905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.026028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.026053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.031357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.031454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.031478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.036920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.037022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.037046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.042377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.042460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.042484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.047682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.047775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 
14:40:37.047799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.053148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.053242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.053266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.058569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.058663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.058687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.064075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.064160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.064184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.069539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.069630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.069654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.075078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.075172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.075197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.080723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.080826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.080850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.086157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.086243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:28.590 [2024-04-17 14:40:37.086270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.091595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.091685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.091709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.097055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.097141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.097165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.102436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.102537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.102562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.107963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.108056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.108080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.113384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.113481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.113504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.118864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.118968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.118995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.124213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.124316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.124340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.129712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.129798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.129824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.135245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.590 [2024-04-17 14:40:37.135327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.590 [2024-04-17 14:40:37.135352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.590 [2024-04-17 14:40:37.140788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.140870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.140895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.591 [2024-04-17 14:40:37.146268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.146353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.146377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.591 [2024-04-17 14:40:37.151648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.151730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.151755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.591 [2024-04-17 14:40:37.157192] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.157274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.157298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.591 [2024-04-17 14:40:37.162694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.162783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.162806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.591 [2024-04-17 14:40:37.168359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.168450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.168474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.591 [2024-04-17 14:40:37.174003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.174083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.174108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.591 [2024-04-17 14:40:37.179456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.179545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.179599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.591 [2024-04-17 14:40:37.185056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.185136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.185160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.591 [2024-04-17 14:40:37.190496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.591 [2024-04-17 14:40:37.190582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.591 [2024-04-17 14:40:37.190607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.851 [2024-04-17 14:40:37.195983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.196076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.196100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.201487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.201579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.201603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.207090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.207190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.207215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.212525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.212616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.212640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.218100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.218211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.218236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.223717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.223808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.223831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.229369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.229468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.229493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.235108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.235207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.235231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.240737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.240825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.240849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.246233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.246318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.246343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.251596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.251698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.251721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.257254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.257360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.257385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.852 [2024-04-17 14:40:37.262684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.852 [2024-04-17 14:40:37.262767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.852 [2024-04-17 14:40:37.262791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.268190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.268296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.268321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.273711] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.273798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.273823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.279217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 
14:40:37.279315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.279344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.284791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.284883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.284910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.290421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.290530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.290574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.296064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.296169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.296199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.301713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.301836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.301865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.307102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.307241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.307279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.312544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.312646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.312674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.318126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with 
pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.318236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.318274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.323638] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.323759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.323789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.329196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.329310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.329372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.334897] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.335040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.335072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.340537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.340651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.340680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.346144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.346248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.346277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.351646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.351746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.351776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.357127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.357255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.357288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.362702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.362814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.362844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.368116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.368210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.368239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.374020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.374137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.374168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.379624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.379731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.379762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.385284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.385407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.385441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.391048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.391155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.391208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.396741] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.396868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.396902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.402339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.402460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.402493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.408043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.408159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.408194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.413706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.853 [2024-04-17 14:40:37.413832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.853 [2024-04-17 14:40:37.413863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.853 [2024-04-17 14:40:37.419343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.854 [2024-04-17 14:40:37.419463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.854 [2024-04-17 14:40:37.419495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.854 [2024-04-17 14:40:37.424902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.854 [2024-04-17 14:40:37.425042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.854 [2024-04-17 14:40:37.425074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.854 [2024-04-17 14:40:37.430433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.854 [2024-04-17 14:40:37.430545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.854 [2024-04-17 14:40:37.430577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.854 
[2024-04-17 14:40:37.435992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.854 [2024-04-17 14:40:37.436113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.854 [2024-04-17 14:40:37.436144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.854 [2024-04-17 14:40:37.441505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.854 [2024-04-17 14:40:37.441619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.854 [2024-04-17 14:40:37.441650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.854 [2024-04-17 14:40:37.447145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.854 [2024-04-17 14:40:37.447271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.854 [2024-04-17 14:40:37.447301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.854 [2024-04-17 14:40:37.452722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13b5d00) with pdu=0x2000190fef90 00:21:28.854 [2024-04-17 14:40:37.452837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.854 [2024-04-17 14:40:37.452868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.113 00:21:29.113 Latency(us) 00:21:29.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.113 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:29.113 nvme0n1 : 2.00 5649.03 706.13 0.00 0.00 2826.04 2010.76 10247.45 00:21:29.113 =================================================================================================================== 00:21:29.113 Total : 5649.03 706.13 0.00 0.00 2826.04 2010.76 10247.45 00:21:29.113 0 00:21:29.113 14:40:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:29.113 14:40:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:29.113 14:40:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:29.113 14:40:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:29.113 | .driver_specific 00:21:29.113 | .nvme_error 00:21:29.113 | .status_code 00:21:29.113 | .command_transient_transport_error' 00:21:29.372 14:40:37 -- host/digest.sh@71 -- # (( 364 > 0 )) 00:21:29.372 14:40:37 -- host/digest.sh@73 -- # killprocess 76320 00:21:29.372 14:40:37 -- common/autotest_common.sh@936 -- # '[' -z 76320 ']' 00:21:29.372 14:40:37 -- common/autotest_common.sh@940 -- # kill -0 76320 00:21:29.372 14:40:37 -- common/autotest_common.sh@941 -- # uname 00:21:29.372 14:40:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:29.372 14:40:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 76320 00:21:29.372 killing process with pid 76320 00:21:29.372 Received shutdown signal, test time was about 2.000000 seconds 00:21:29.372 00:21:29.372 Latency(us) 00:21:29.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.372 =================================================================================================================== 00:21:29.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.372 14:40:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:29.372 14:40:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:29.372 14:40:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76320' 00:21:29.372 14:40:37 -- common/autotest_common.sh@955 -- # kill 76320 00:21:29.372 14:40:37 -- common/autotest_common.sh@960 -- # wait 76320 00:21:29.631 14:40:37 -- host/digest.sh@116 -- # killprocess 76121 00:21:29.631 14:40:37 -- common/autotest_common.sh@936 -- # '[' -z 76121 ']' 00:21:29.631 14:40:37 -- common/autotest_common.sh@940 -- # kill -0 76121 00:21:29.631 14:40:37 -- common/autotest_common.sh@941 -- # uname 00:21:29.631 14:40:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:29.631 14:40:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76121 00:21:29.631 killing process with pid 76121 00:21:29.631 14:40:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:29.631 14:40:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:29.631 14:40:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76121' 00:21:29.631 14:40:38 -- common/autotest_common.sh@955 -- # kill 76121 00:21:29.631 14:40:38 -- common/autotest_common.sh@960 -- # wait 76121 00:21:29.631 ************************************ 00:21:29.631 END TEST nvmf_digest_error 00:21:29.631 ************************************ 00:21:29.631 00:21:29.631 real 0m16.589s 00:21:29.631 user 0m32.213s 00:21:29.631 sys 0m4.314s 00:21:29.631 14:40:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:29.631 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.891 14:40:38 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:29.891 14:40:38 -- host/digest.sh@150 -- # nvmftestfini 00:21:29.891 14:40:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:29.891 14:40:38 -- nvmf/common.sh@117 -- # sync 00:21:29.891 14:40:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:29.891 14:40:38 -- nvmf/common.sh@120 -- # set +e 00:21:29.891 14:40:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:29.891 14:40:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:29.891 rmmod nvme_tcp 00:21:29.891 rmmod nvme_fabrics 00:21:29.891 rmmod nvme_keyring 00:21:29.891 14:40:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:29.891 14:40:38 -- nvmf/common.sh@124 -- # set -e 00:21:29.891 14:40:38 -- nvmf/common.sh@125 -- # return 0 00:21:29.891 14:40:38 -- nvmf/common.sh@478 -- # '[' -n 76121 ']' 00:21:29.891 14:40:38 -- nvmf/common.sh@479 -- # killprocess 76121 00:21:29.891 14:40:38 -- common/autotest_common.sh@936 -- # '[' -z 76121 ']' 00:21:29.891 14:40:38 -- common/autotest_common.sh@940 -- # kill -0 76121 00:21:29.891 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76121) - No such process 00:21:29.891 Process with pid 76121 is not found 00:21:29.891 14:40:38 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76121 is not found' 00:21:29.891 14:40:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 
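(Editor's note, not part of the captured output.) The pass condition for the digest-error test above is derived from bdev I/O statistics: host/digest.sh calls bdev_get_iostat over the bperf RPC socket and pulls the transient transport error counter out with jq, then checks that it is greater than zero (364 in this run). The following is a minimal sketch of that query, assuming the bperf process is still up and serving /var/tmp/bperf.sock with a bdev named nvme0n1 exactly as in the run above; by this point in the log the process has already been killed, so this is illustrative only.

    #!/usr/bin/env bash
    # Sketch: re-derive the transient transport error count the same way
    # host/digest.sh does in the log above. Paths, socket and bdev name are
    # copied from the captured commands and assumed unchanged.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # The test passes when at least one transient transport error was seen;
    # the run above reported 364 of them.
    (( errcount > 0 )) && echo "data digest errors detected as expected: $errcount"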
00:21:29.891 14:40:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:29.891 14:40:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:29.891 14:40:38 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.891 14:40:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.891 14:40:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.891 14:40:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.891 14:40:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.891 14:40:38 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:29.891 ************************************ 00:21:29.891 END TEST nvmf_digest 00:21:29.891 ************************************ 00:21:29.891 00:21:29.891 real 0m35.525s 00:21:29.891 user 1m8.045s 00:21:29.891 sys 0m9.078s 00:21:29.891 14:40:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:29.891 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.891 14:40:38 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:21:29.891 14:40:38 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:21:29.891 14:40:38 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:29.891 14:40:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:29.891 14:40:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:29.891 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:30.151 ************************************ 00:21:30.151 START TEST nvmf_multipath 00:21:30.151 ************************************ 00:21:30.151 14:40:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:30.151 * Looking for test storage... 
00:21:30.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:30.151 14:40:38 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:30.151 14:40:38 -- nvmf/common.sh@7 -- # uname -s 00:21:30.151 14:40:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.151 14:40:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.151 14:40:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.151 14:40:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.151 14:40:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.151 14:40:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.151 14:40:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.151 14:40:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.151 14:40:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.151 14:40:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.151 14:40:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:21:30.151 14:40:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:21:30.151 14:40:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.151 14:40:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.151 14:40:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:30.151 14:40:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.151 14:40:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:30.151 14:40:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.151 14:40:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.151 14:40:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.151 14:40:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.151 14:40:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.151 14:40:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.151 14:40:38 -- paths/export.sh@5 -- # export PATH 00:21:30.151 14:40:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.151 14:40:38 -- nvmf/common.sh@47 -- # : 0 00:21:30.151 14:40:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.151 14:40:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.151 14:40:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.151 14:40:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.151 14:40:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.151 14:40:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.151 14:40:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.151 14:40:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.151 14:40:38 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:30.151 14:40:38 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:30.151 14:40:38 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:30.151 14:40:38 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:30.151 14:40:38 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.151 14:40:38 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:30.151 14:40:38 -- host/multipath.sh@30 -- # nvmftestinit 00:21:30.151 14:40:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:30.151 14:40:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.151 14:40:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:30.151 14:40:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:30.151 14:40:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:30.151 14:40:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.151 14:40:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.151 14:40:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.151 14:40:38 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:30.151 14:40:38 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:30.151 14:40:38 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:30.151 14:40:38 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:30.151 14:40:38 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:30.151 14:40:38 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:30.151 14:40:38 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.151 14:40:38 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.151 14:40:38 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:30.151 14:40:38 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:30.151 14:40:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:30.151 14:40:38 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:30.151 14:40:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:30.151 14:40:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.151 14:40:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:30.151 14:40:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:30.151 14:40:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:30.151 14:40:38 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:30.151 14:40:38 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:30.151 14:40:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:30.151 Cannot find device "nvmf_tgt_br" 00:21:30.151 14:40:38 -- nvmf/common.sh@155 -- # true 00:21:30.151 14:40:38 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:30.151 Cannot find device "nvmf_tgt_br2" 00:21:30.151 14:40:38 -- nvmf/common.sh@156 -- # true 00:21:30.151 14:40:38 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:30.151 14:40:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:30.151 Cannot find device "nvmf_tgt_br" 00:21:30.151 14:40:38 -- nvmf/common.sh@158 -- # true 00:21:30.151 14:40:38 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:30.151 Cannot find device "nvmf_tgt_br2" 00:21:30.151 14:40:38 -- nvmf/common.sh@159 -- # true 00:21:30.151 14:40:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:30.151 14:40:38 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:30.410 14:40:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:30.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:30.410 14:40:38 -- nvmf/common.sh@162 -- # true 00:21:30.410 14:40:38 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:30.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:30.410 14:40:38 -- nvmf/common.sh@163 -- # true 00:21:30.410 14:40:38 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:30.410 14:40:38 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:30.410 14:40:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:30.410 14:40:38 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:30.410 14:40:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:30.410 14:40:38 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:30.410 14:40:38 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:30.410 14:40:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:30.410 14:40:38 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:30.410 14:40:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:30.410 14:40:38 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:30.410 14:40:38 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:21:30.410 14:40:38 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:30.410 14:40:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:30.410 14:40:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:30.410 14:40:38 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:30.410 14:40:38 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:30.410 14:40:38 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:30.410 14:40:38 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:30.410 14:40:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:30.410 14:40:38 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:30.410 14:40:38 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:30.410 14:40:38 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:30.410 14:40:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:30.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:21:30.410 00:21:30.410 --- 10.0.0.2 ping statistics --- 00:21:30.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.410 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:30.411 14:40:38 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:30.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:30.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:21:30.411 00:21:30.411 --- 10.0.0.3 ping statistics --- 00:21:30.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.411 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:30.411 14:40:38 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:30.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:30.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:30.411 00:21:30.411 --- 10.0.0.1 ping statistics --- 00:21:30.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.411 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:30.411 14:40:38 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.411 14:40:38 -- nvmf/common.sh@422 -- # return 0 00:21:30.411 14:40:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:30.411 14:40:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.411 14:40:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:30.411 14:40:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:30.411 14:40:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.411 14:40:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:30.411 14:40:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:30.411 14:40:38 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:30.411 14:40:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:30.411 14:40:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:30.411 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:30.411 14:40:38 -- nvmf/common.sh@470 -- # nvmfpid=76584 00:21:30.411 14:40:38 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:30.411 14:40:38 -- nvmf/common.sh@471 -- # waitforlisten 76584 00:21:30.411 14:40:38 -- common/autotest_common.sh@817 -- # '[' -z 76584 ']' 00:21:30.411 14:40:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.411 14:40:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:30.411 14:40:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.411 14:40:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:30.411 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:30.670 [2024-04-17 14:40:39.051710] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:30.670 [2024-04-17 14:40:39.051824] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.670 [2024-04-17 14:40:39.192739] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:30.670 [2024-04-17 14:40:39.255236] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.670 [2024-04-17 14:40:39.255474] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.670 [2024-04-17 14:40:39.255640] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.670 [2024-04-17 14:40:39.255715] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.670 [2024-04-17 14:40:39.255819] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
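For readers following the trace: the nvmf_veth_init sequence above builds the small test network that the rest of this run depends on. The target runs inside the nvmf_tgt_ns_spdk namespace and the initiator stays in the default namespace; both sides hang off a bridge, and TCP port 4420 is opened on the initiator interface. A condensed sketch of the equivalent commands, using the same names and addresses as the trace (an illustration of what common.sh does here, not its exact code; the second target interface nvmf_tgt_if2 / 10.0.0.3 is set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as verified in the ping output above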
00:21:30.670 [2024-04-17 14:40:39.257996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.670 [2024-04-17 14:40:39.258012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.937 14:40:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:30.937 14:40:39 -- common/autotest_common.sh@850 -- # return 0 00:21:30.937 14:40:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:30.937 14:40:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:30.937 14:40:39 -- common/autotest_common.sh@10 -- # set +x 00:21:30.937 14:40:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.937 14:40:39 -- host/multipath.sh@33 -- # nvmfapp_pid=76584 00:21:30.937 14:40:39 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:31.213 [2024-04-17 14:40:39.645663] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.213 14:40:39 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:31.472 Malloc0 00:21:31.472 14:40:39 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:31.733 14:40:40 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.992 14:40:40 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.251 [2024-04-17 14:40:40.760990] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.251 14:40:40 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:32.509 [2024-04-17 14:40:40.993151] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:32.510 14:40:41 -- host/multipath.sh@44 -- # bdevperf_pid=76632 00:21:32.510 14:40:41 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:32.510 14:40:41 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:32.510 14:40:41 -- host/multipath.sh@47 -- # waitforlisten 76632 /var/tmp/bdevperf.sock 00:21:32.510 14:40:41 -- common/autotest_common.sh@817 -- # '[' -z 76632 ']' 00:21:32.510 14:40:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.510 14:40:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:32.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.510 14:40:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
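For reference, the target-side configuration traced above reduces to a short chain of rpc.py calls: create the TCP transport, back a subsystem with a 64 MB malloc bdev (MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE from multipath.sh), and expose it on the two ports, 4420 and 4421, that the test will toggle between. The -r flag enables ANA reporting on the subsystem, which the ANA-state checks below rely on. Condensed, with the same flags as in the log (rpc is shorthand for the full script path used in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421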
00:21:32.510 14:40:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:32.510 14:40:41 -- common/autotest_common.sh@10 -- # set +x 00:21:33.446 14:40:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:33.446 14:40:42 -- common/autotest_common.sh@850 -- # return 0 00:21:33.446 14:40:42 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:33.704 14:40:42 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:34.271 Nvme0n1 00:21:34.271 14:40:42 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:34.529 Nvme0n1 00:21:34.529 14:40:42 -- host/multipath.sh@78 -- # sleep 1 00:21:34.529 14:40:42 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:35.464 14:40:43 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:35.464 14:40:43 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:35.723 14:40:44 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:35.982 14:40:44 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:35.982 14:40:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:35.982 14:40:44 -- host/multipath.sh@65 -- # dtrace_pid=76677 00:21:35.982 14:40:44 -- host/multipath.sh@66 -- # sleep 6 00:21:42.548 14:40:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:42.548 14:40:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:42.548 14:40:50 -- host/multipath.sh@67 -- # active_port=4421 00:21:42.548 14:40:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:42.548 Attaching 4 probes... 
00:21:42.548 @path[10.0.0.2, 4421]: 16406 00:21:42.548 @path[10.0.0.2, 4421]: 16893 00:21:42.548 @path[10.0.0.2, 4421]: 16842 00:21:42.548 @path[10.0.0.2, 4421]: 16985 00:21:42.548 @path[10.0.0.2, 4421]: 16999 00:21:42.548 14:40:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:42.548 14:40:50 -- host/multipath.sh@69 -- # sed -n 1p 00:21:42.548 14:40:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:42.548 14:40:50 -- host/multipath.sh@69 -- # port=4421 00:21:42.548 14:40:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:42.548 14:40:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:42.548 14:40:50 -- host/multipath.sh@72 -- # kill 76677 00:21:42.548 14:40:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:42.548 14:40:50 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:42.548 14:40:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:42.548 14:40:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:43.116 14:40:51 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:43.116 14:40:51 -- host/multipath.sh@65 -- # dtrace_pid=76795 00:21:43.116 14:40:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:43.116 14:40:51 -- host/multipath.sh@66 -- # sleep 6 00:21:49.681 14:40:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:49.681 14:40:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:49.681 14:40:57 -- host/multipath.sh@67 -- # active_port=4420 00:21:49.681 14:40:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:49.681 Attaching 4 probes... 
00:21:49.681 @path[10.0.0.2, 4420]: 16828 00:21:49.681 @path[10.0.0.2, 4420]: 16971 00:21:49.681 @path[10.0.0.2, 4420]: 16834 00:21:49.681 @path[10.0.0.2, 4420]: 16962 00:21:49.681 @path[10.0.0.2, 4420]: 16476 00:21:49.681 14:40:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:49.681 14:40:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:49.681 14:40:57 -- host/multipath.sh@69 -- # sed -n 1p 00:21:49.681 14:40:57 -- host/multipath.sh@69 -- # port=4420 00:21:49.681 14:40:57 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:49.681 14:40:57 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:49.681 14:40:57 -- host/multipath.sh@72 -- # kill 76795 00:21:49.681 14:40:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:49.681 14:40:57 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:49.681 14:40:57 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:49.681 14:40:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:49.940 14:40:58 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:49.940 14:40:58 -- host/multipath.sh@65 -- # dtrace_pid=76907 00:21:49.940 14:40:58 -- host/multipath.sh@66 -- # sleep 6 00:21:49.940 14:40:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:56.535 14:41:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:56.535 14:41:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:56.535 14:41:04 -- host/multipath.sh@67 -- # active_port=4421 00:21:56.535 14:41:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.535 Attaching 4 probes... 
00:21:56.535 @path[10.0.0.2, 4421]: 13256 00:21:56.535 @path[10.0.0.2, 4421]: 16455 00:21:56.535 @path[10.0.0.2, 4421]: 16718 00:21:56.535 @path[10.0.0.2, 4421]: 15901 00:21:56.535 @path[10.0.0.2, 4421]: 15363 00:21:56.535 14:41:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:56.535 14:41:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:56.535 14:41:04 -- host/multipath.sh@69 -- # sed -n 1p 00:21:56.535 14:41:04 -- host/multipath.sh@69 -- # port=4421 00:21:56.535 14:41:04 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:56.535 14:41:04 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:56.535 14:41:04 -- host/multipath.sh@72 -- # kill 76907 00:21:56.535 14:41:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.535 14:41:04 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:56.535 14:41:04 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:56.535 14:41:04 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:56.793 14:41:05 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:56.793 14:41:05 -- host/multipath.sh@65 -- # dtrace_pid=77025 00:21:56.793 14:41:05 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:56.793 14:41:05 -- host/multipath.sh@66 -- # sleep 6 00:22:03.347 14:41:11 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:03.347 14:41:11 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:03.347 14:41:11 -- host/multipath.sh@67 -- # active_port= 00:22:03.347 14:41:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:03.347 Attaching 4 probes... 
00:22:03.347 00:22:03.347 00:22:03.347 00:22:03.347 00:22:03.347 00:22:03.347 14:41:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:03.347 14:41:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:03.347 14:41:11 -- host/multipath.sh@69 -- # sed -n 1p 00:22:03.347 14:41:11 -- host/multipath.sh@69 -- # port= 00:22:03.347 14:41:11 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:03.347 14:41:11 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:03.347 14:41:11 -- host/multipath.sh@72 -- # kill 77025 00:22:03.347 14:41:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:03.347 14:41:11 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:03.348 14:41:11 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:03.348 14:41:11 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:03.996 14:41:12 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:03.996 14:41:12 -- host/multipath.sh@65 -- # dtrace_pid=77142 00:22:03.996 14:41:12 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:03.996 14:41:12 -- host/multipath.sh@66 -- # sleep 6 00:22:10.603 14:41:18 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:10.603 14:41:18 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:10.603 14:41:18 -- host/multipath.sh@67 -- # active_port=4421 00:22:10.603 14:41:18 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:10.603 Attaching 4 probes... 
00:22:10.603 @path[10.0.0.2, 4421]: 14347 00:22:10.603 @path[10.0.0.2, 4421]: 15790 00:22:10.603 @path[10.0.0.2, 4421]: 16008 00:22:10.603 @path[10.0.0.2, 4421]: 16173 00:22:10.603 @path[10.0.0.2, 4421]: 15272 00:22:10.603 14:41:18 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:10.603 14:41:18 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:10.603 14:41:18 -- host/multipath.sh@69 -- # sed -n 1p 00:22:10.603 14:41:18 -- host/multipath.sh@69 -- # port=4421 00:22:10.603 14:41:18 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:10.603 14:41:18 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:10.603 14:41:18 -- host/multipath.sh@72 -- # kill 77142 00:22:10.603 14:41:18 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:10.603 14:41:18 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:10.603 [2024-04-17 14:41:18.869176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869279] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869292] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869310] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 [2024-04-17 14:41:18.869372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ecc60 is same with the state(5) to be set 00:22:10.603 14:41:18 -- host/multipath.sh@101 -- # sleep 1 00:22:11.558 14:41:19 -- 
host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:11.558 14:41:19 -- host/multipath.sh@65 -- # dtrace_pid=77267 00:22:11.558 14:41:19 -- host/multipath.sh@66 -- # sleep 6 00:22:11.558 14:41:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:18.135 14:41:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:18.135 14:41:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:18.135 14:41:26 -- host/multipath.sh@67 -- # active_port=4420 00:22:18.135 14:41:26 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.135 Attaching 4 probes... 00:22:18.135 @path[10.0.0.2, 4420]: 15602 00:22:18.135 @path[10.0.0.2, 4420]: 15828 00:22:18.135 @path[10.0.0.2, 4420]: 15928 00:22:18.135 @path[10.0.0.2, 4420]: 15849 00:22:18.135 @path[10.0.0.2, 4420]: 15774 00:22:18.135 14:41:26 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:18.135 14:41:26 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:18.135 14:41:26 -- host/multipath.sh@69 -- # sed -n 1p 00:22:18.135 14:41:26 -- host/multipath.sh@69 -- # port=4420 00:22:18.135 14:41:26 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:18.135 14:41:26 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:18.135 14:41:26 -- host/multipath.sh@72 -- # kill 77267 00:22:18.135 14:41:26 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.135 14:41:26 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:18.135 [2024-04-17 14:41:26.521163] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:18.135 14:41:26 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:18.394 14:41:26 -- host/multipath.sh@111 -- # sleep 6 00:22:24.954 14:41:32 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:24.954 14:41:32 -- host/multipath.sh@65 -- # dtrace_pid=77447 00:22:24.954 14:41:32 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76584 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:24.954 14:41:32 -- host/multipath.sh@66 -- # sleep 6 00:22:31.526 14:41:38 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:31.526 14:41:38 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:31.526 14:41:39 -- host/multipath.sh@67 -- # active_port=4421 00:22:31.526 14:41:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.526 Attaching 4 probes... 
00:22:31.526 @path[10.0.0.2, 4421]: 16215 00:22:31.526 @path[10.0.0.2, 4421]: 16528 00:22:31.526 @path[10.0.0.2, 4421]: 16055 00:22:31.526 @path[10.0.0.2, 4421]: 16031 00:22:31.526 @path[10.0.0.2, 4421]: 16484 00:22:31.526 14:41:39 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:31.526 14:41:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:31.526 14:41:39 -- host/multipath.sh@69 -- # sed -n 1p 00:22:31.526 14:41:39 -- host/multipath.sh@69 -- # port=4421 00:22:31.526 14:41:39 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:31.526 14:41:39 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:31.526 14:41:39 -- host/multipath.sh@72 -- # kill 77447 00:22:31.526 14:41:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.526 14:41:39 -- host/multipath.sh@114 -- # killprocess 76632 00:22:31.526 14:41:39 -- common/autotest_common.sh@936 -- # '[' -z 76632 ']' 00:22:31.526 14:41:39 -- common/autotest_common.sh@940 -- # kill -0 76632 00:22:31.526 14:41:39 -- common/autotest_common.sh@941 -- # uname 00:22:31.526 14:41:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.526 14:41:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76632 00:22:31.526 killing process with pid 76632 00:22:31.526 14:41:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:31.526 14:41:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:31.526 14:41:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76632' 00:22:31.526 14:41:39 -- common/autotest_common.sh@955 -- # kill 76632 00:22:31.526 14:41:39 -- common/autotest_common.sh@960 -- # wait 76632 00:22:31.526 Connection closed with partial response: 00:22:31.526 00:22:31.526 00:22:31.526 14:41:39 -- host/multipath.sh@116 -- # wait 76632 00:22:31.526 14:41:39 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:31.526 [2024-04-17 14:40:41.067918] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:22:31.526 [2024-04-17 14:40:41.068058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76632 ] 00:22:31.526 [2024-04-17 14:40:41.207621] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.526 [2024-04-17 14:40:41.276077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.526 Running I/O for 90 seconds... 
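Context for the bdevperf log that follows: on the host side the test attached the same subsystem twice over the bdevperf RPC socket, first via port 4420 and then via 4421 with -x multipath, so the single Nvme0n1 bdev has two paths to the same namespace. The stream of ASYMMETRIC ACCESS INACCESSIBLE completions below is therefore expected rather than a failure: each time the test flips a listener's ANA state (or removes the 4421 listener outright), I/O in flight on that path completes with that status and the multipath layer reissues it on the remaining path. A condensed sketch of the attach sequence, with flags copied from the RPCs traced earlier:

    brpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $brpc bdev_nvme_set_options -r -1
    $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10    # second path for the same bdev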
00:22:31.526 [2024-04-17 14:40:51.388860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.526 [2024-04-17 14:40:51.388935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:31.526 [2024-04-17 14:40:51.389012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.526 [2024-04-17 14:40:51.389047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:31.526 [2024-04-17 14:40:51.389072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.526 [2024-04-17 14:40:51.389088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:31.526 [2024-04-17 14:40:51.389111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.526 [2024-04-17 14:40:51.389127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:31.526 [2024-04-17 14:40:51.389149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.526 [2024-04-17 14:40:51.389164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:31.526 [2024-04-17 14:40:51.389186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.526 [2024-04-17 14:40:51.389202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:31.526 [2024-04-17 14:40:51.389224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.526 [2024-04-17 14:40:51.389240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:31.526 [2024-04-17 14:40:51.389261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.389277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.389315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.389352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.389430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.389476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.389516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.389554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.389593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.389631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.389926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.389969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.389989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.390030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.390068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.390108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.390146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.390184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.390235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:31.527 [2024-04-17 14:40:51.390463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.390837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.527 [2024-04-17 14:40:51.390853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.391352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.391381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.391409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.391427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:31.527 [2024-04-17 14:40:51.391454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.527 [2024-04-17 14:40:51.391470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.391942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.391990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.528 [2024-04-17 14:40:51.392007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.392029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.528 [2024-04-17 14:40:51.392045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.392067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.528 [2024-04-17 14:40:51.392084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:31.528 [2024-04-17 14:40:51.392113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.528 [2024-04-17 14:40:51.392129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:22:31.528 [2024-04-17 14:40:51.392151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:31.528 [2024-04-17 14:40:51.392175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:22:31.528 [... repeated *NOTICE* pairs from nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion at 14:40:51 for READ (lba 32264-32608) and WRITE (lba 32872-33056) commands on sqid:1; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02), and the pairs differ only in cid, lba and sqhd ...]
00:22:31.530 [... the same *NOTICE* pattern repeats at 14:40:58 for READ (lba 83336-83712) and WRITE (lba 83720-84352) commands, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:22:31.534 [... the same *NOTICE* pattern begins again at 14:41:05 for WRITE (lba 91696-91752) and READ (lba 91248 onward) commands ...]
00:22:31.534 [2024-04-17 14:41:05.236264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:31.534 [2024-04-17 14:41:05.236279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.236317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.236377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.236419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.236457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.236493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.236531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.534 [2024-04-17 14:41:05.236575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.534 [2024-04-17 14:41:05.236614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.534 [2024-04-17 14:41:05.236654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.534 [2024-04-17 14:41:05.236692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.534 [2024-04-17 14:41:05.236729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.534 [2024-04-17 14:41:05.236767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.534 [2024-04-17 14:41:05.236804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.534 [2024-04-17 14:41:05.236842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.236891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.236929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.236964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.236983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.237006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.534 [2024-04-17 14:41:05.237022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:31.534 [2024-04-17 14:41:05.237045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 
dnr:0 00:22:31.535 [2024-04-17 14:41:05.237471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.237486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.237938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.237981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.535 [2024-04-17 14:41:05.238789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.238837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.238879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:31.535 [2024-04-17 14:41:05.238904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.535 [2024-04-17 14:41:05.238920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.238959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.238978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.536 [2024-04-17 14:41:05.239656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.239965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.239991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.240008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.240033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.240050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.240074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.240090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.240115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.240131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:31.536 [2024-04-17 14:41:05.240156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.536 [2024-04-17 14:41:05.240173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 
dnr:0 00:22:31.537 [2024-04-17 14:41:05.240454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.537 [2024-04-17 14:41:05.240676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.240729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.240771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.240812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.240853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.240893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.240945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.240984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:31.537 [2024-04-17 14:41:05.241491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.537 [2024-04-17 14:41:05.241521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:05.241547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:05.241564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:05.241588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:05.241605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:05.241630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:05.241646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:05.241671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:05.241688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:05.241713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:31.538 [2024-04-17 14:41:05.241729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.869652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.869698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.869730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.869760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.869790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.869820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.869874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.869904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.869934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 
14:41:18.869983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.869999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.870193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.870223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.870252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.870293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.870324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.870354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.870384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.538 [2024-04-17 14:41:18.870414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.538 [2024-04-17 14:41:18.870594] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.538 [2024-04-17 14:41:18.870609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.870976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.870992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.871006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.871036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.871075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.871105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.871135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.871166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 
[2024-04-17 14:41:18.871551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.539 [2024-04-17 14:41:18.871656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.871687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.871717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.539 [2024-04-17 14:41:18.871732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.539 [2024-04-17 14:41:18.871747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.871763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.871777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.871792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.871807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.871829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.871844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.871860] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.871874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.871889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.871904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.871919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.871934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.871960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.871976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.871992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:25 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.540 [2024-04-17 14:41:18.872660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107904 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.540 [2024-04-17 14:41:18.872975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.540 [2024-04-17 14:41:18.872991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.541 [2024-04-17 14:41:18.873191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.541 [2024-04-17 14:41:18.873220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.541 [2024-04-17 14:41:18.873250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.541 [2024-04-17 14:41:18.873279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.541 [2024-04-17 14:41:18.873309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.541 [2024-04-17 14:41:18.873338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.541 [2024-04-17 14:41:18.873370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.541 [2024-04-17 14:41:18.873406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 
14:41:18.873437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.541 [2024-04-17 14:41:18.873640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bb6d0 is same with the state(5) to be set 00:22:31.541 [2024-04-17 14:41:18.873673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.541 [2024-04-17 14:41:18.873683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.541 [2024-04-17 14:41:18.873694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108056 len:8 PRP1 0x0 PRP2 0x0 00:22:31.541 [2024-04-17 14:41:18.873707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.541 [2024-04-17 14:41:18.873759] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15bb6d0 was disconnected and freed. reset controller. 
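The block above is a per-command dump of every queued I/O that was aborted with SQ DELETION while the active path was torn down. When scanning long runs like this in a saved copy of the console output, a rough summary can be pulled out with standard tools; this is a sketch only, and build.log is a placeholder for wherever the log is saved, not a file produced by the test:

  # Count the aborted completions, then break the aborted commands down by opcode.
  grep -c 'ABORTED - SQ DELETION' build.log
  grep 'nvme_io_qpair_print_command' build.log \
    | grep -oE ' (READ|WRITE) ' | sort | uniq -c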
00:22:31.541 [2024-04-17 14:41:18.874916] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:31.541 [2024-04-17 14:41:18.875023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c1a20 (9): Bad file descriptor 00:22:31.541 [2024-04-17 14:41:18.875439] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.541 [2024-04-17 14:41:18.875523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.541 [2024-04-17 14:41:18.875578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.541 [2024-04-17 14:41:18.875601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c1a20 with addr=10.0.0.2, port=4421 00:22:31.541 [2024-04-17 14:41:18.875631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c1a20 is same with the state(5) to be set 00:22:31.541 [2024-04-17 14:41:18.875666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c1a20 (9): Bad file descriptor 00:22:31.541 [2024-04-17 14:41:18.875698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:31.542 [2024-04-17 14:41:18.875717] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:31.542 [2024-04-17 14:41:18.875732] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:31.542 [2024-04-17 14:41:18.875765] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.542 [2024-04-17 14:41:18.875782] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:31.542 [2024-04-17 14:41:28.933839] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
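The sequence above is the failover path of the multipath test: the in-flight commands on qid 1 are aborted when the submission queue goes away, qpair 0x15bb6d0 is freed, and bdev_nvme keeps retrying the alternate listener on port 4421 (connect() errno 111) until the reset completes about ten seconds later. A minimal sketch of the listener flip that provokes this behaviour, using the same rpc.py calls that appear elsewhere in this log; the exact ordering and timing in multipath.sh may differ:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Drop the path carrying the traffic; queued I/O on that qpair is completed
  # manually with "ABORTED - SQ DELETION" and a controller reset is started.
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420

  # Bring up the alternate path; the reconnect loop (errno 111 above) succeeds
  # once this listener exists, and the log reports "Resetting controller successful".
  $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421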
00:22:31.542 Received shutdown signal, test time was about 56.134721 seconds 00:22:31.542 00:22:31.542 Latency(us) 00:22:31.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.542 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:31.542 Verification LBA range: start 0x0 length 0x4000 00:22:31.542 Nvme0n1 : 56.13 6943.21 27.12 0.00 0.00 18405.82 867.61 7046430.72 00:22:31.542 =================================================================================================================== 00:22:31.542 Total : 6943.21 27.12 0.00 0.00 18405.82 867.61 7046430.72 00:22:31.542 14:41:39 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.542 14:41:39 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:31.542 14:41:39 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:31.542 14:41:39 -- host/multipath.sh@125 -- # nvmftestfini 00:22:31.542 14:41:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:31.542 14:41:39 -- nvmf/common.sh@117 -- # sync 00:22:31.542 14:41:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.542 14:41:39 -- nvmf/common.sh@120 -- # set +e 00:22:31.542 14:41:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.542 14:41:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.542 rmmod nvme_tcp 00:22:31.542 rmmod nvme_fabrics 00:22:31.542 rmmod nvme_keyring 00:22:31.542 14:41:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.542 14:41:39 -- nvmf/common.sh@124 -- # set -e 00:22:31.542 14:41:39 -- nvmf/common.sh@125 -- # return 0 00:22:31.542 14:41:39 -- nvmf/common.sh@478 -- # '[' -n 76584 ']' 00:22:31.542 14:41:39 -- nvmf/common.sh@479 -- # killprocess 76584 00:22:31.542 14:41:39 -- common/autotest_common.sh@936 -- # '[' -z 76584 ']' 00:22:31.542 14:41:39 -- common/autotest_common.sh@940 -- # kill -0 76584 00:22:31.542 14:41:39 -- common/autotest_common.sh@941 -- # uname 00:22:31.542 14:41:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.542 14:41:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76584 00:22:31.542 14:41:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:31.542 killing process with pid 76584 00:22:31.542 14:41:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:31.542 14:41:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76584' 00:22:31.542 14:41:39 -- common/autotest_common.sh@955 -- # kill 76584 00:22:31.542 14:41:39 -- common/autotest_common.sh@960 -- # wait 76584 00:22:31.542 14:41:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:31.542 14:41:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:31.542 14:41:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:31.542 14:41:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.542 14:41:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.542 14:41:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.542 14:41:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.542 14:41:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.542 14:41:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:31.801 ************************************ 00:22:31.801 END TEST nvmf_multipath 00:22:31.801 ************************************ 00:22:31.801 00:22:31.801 real 1m1.616s 00:22:31.801 
user 2m51.682s 00:22:31.801 sys 0m18.888s 00:22:31.801 14:41:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:31.801 14:41:40 -- common/autotest_common.sh@10 -- # set +x 00:22:31.801 14:41:40 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:31.801 14:41:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:31.801 14:41:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:31.801 14:41:40 -- common/autotest_common.sh@10 -- # set +x 00:22:31.801 ************************************ 00:22:31.801 START TEST nvmf_timeout 00:22:31.801 ************************************ 00:22:31.801 14:41:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:31.801 * Looking for test storage... 00:22:31.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:31.801 14:41:40 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:31.801 14:41:40 -- nvmf/common.sh@7 -- # uname -s 00:22:31.801 14:41:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.801 14:41:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.801 14:41:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.801 14:41:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.801 14:41:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.801 14:41:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.801 14:41:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.801 14:41:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.801 14:41:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.801 14:41:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.801 14:41:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:22:31.801 14:41:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:22:31.801 14:41:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.801 14:41:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.801 14:41:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:31.801 14:41:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.801 14:41:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:31.801 14:41:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.801 14:41:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.801 14:41:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.801 14:41:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.801 14:41:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.801 14:41:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.801 14:41:40 -- paths/export.sh@5 -- # export PATH 00:22:31.801 14:41:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.801 14:41:40 -- nvmf/common.sh@47 -- # : 0 00:22:31.801 14:41:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.801 14:41:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.801 14:41:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.801 14:41:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.801 14:41:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.801 14:41:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.801 14:41:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.801 14:41:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.801 14:41:40 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.801 14:41:40 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.801 14:41:40 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:31.801 14:41:40 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:31.801 14:41:40 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.801 14:41:40 -- host/timeout.sh@19 -- # nvmftestinit 00:22:31.801 14:41:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:31.801 14:41:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.801 14:41:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:31.801 14:41:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:31.801 14:41:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:31.801 14:41:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.801 14:41:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.801 14:41:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.801 14:41:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
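nvmftestinit with NET_TYPE=virt ends in nvmf_veth_init, whose ip/iptables trace fills the next few dozen lines. Condensed here as a sketch of the topology it builds (interface names and addresses are the ones traced below; the matching 'ip link set ... up' calls are omitted):

  # One network namespace holds the target-side interfaces.
  ip netns add nvmf_tgt_ns_spdk

  # Three veth pairs: one for the initiator, two for the target.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # 10.0.0.1 is the initiator; 10.0.0.2 and 10.0.0.3 are the target addresses.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # The root-namespace ends of all three pairs are bridged together, and
  # NVMe/TCP traffic to port 4420 is allowed in on the initiator interface.
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT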
00:22:31.801 14:41:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:31.801 14:41:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:31.801 14:41:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:31.801 14:41:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:31.801 14:41:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:31.802 14:41:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.802 14:41:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.802 14:41:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:31.802 14:41:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:31.802 14:41:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:31.802 14:41:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:31.802 14:41:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:31.802 14:41:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.802 14:41:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:31.802 14:41:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:31.802 14:41:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:31.802 14:41:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:31.802 14:41:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:31.802 14:41:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:31.802 Cannot find device "nvmf_tgt_br" 00:22:31.802 14:41:40 -- nvmf/common.sh@155 -- # true 00:22:31.802 14:41:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:31.802 Cannot find device "nvmf_tgt_br2" 00:22:31.802 14:41:40 -- nvmf/common.sh@156 -- # true 00:22:31.802 14:41:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:31.802 14:41:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:31.802 Cannot find device "nvmf_tgt_br" 00:22:31.802 14:41:40 -- nvmf/common.sh@158 -- # true 00:22:31.802 14:41:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:32.060 Cannot find device "nvmf_tgt_br2" 00:22:32.060 14:41:40 -- nvmf/common.sh@159 -- # true 00:22:32.060 14:41:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:32.060 14:41:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:32.060 14:41:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:32.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.060 14:41:40 -- nvmf/common.sh@162 -- # true 00:22:32.060 14:41:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:32.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.060 14:41:40 -- nvmf/common.sh@163 -- # true 00:22:32.060 14:41:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:32.060 14:41:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:32.060 14:41:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:32.060 14:41:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:32.060 14:41:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:32.060 14:41:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:32.060 14:41:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:22:32.060 14:41:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:32.060 14:41:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:32.060 14:41:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:32.060 14:41:40 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:32.060 14:41:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:32.060 14:41:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:32.060 14:41:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:32.060 14:41:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:32.060 14:41:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:32.060 14:41:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:32.060 14:41:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:32.060 14:41:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:32.060 14:41:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:32.060 14:41:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:32.060 14:41:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:32.060 14:41:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:32.319 14:41:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:32.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:22:32.319 00:22:32.319 --- 10.0.0.2 ping statistics --- 00:22:32.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.319 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:32.319 14:41:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:32.319 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:32.319 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:22:32.319 00:22:32.319 --- 10.0.0.3 ping statistics --- 00:22:32.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.319 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:32.319 14:41:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:32.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:22:32.319 00:22:32.319 --- 10.0.0.1 ping statistics --- 00:22:32.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.319 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:22:32.319 14:41:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.319 14:41:40 -- nvmf/common.sh@422 -- # return 0 00:22:32.319 14:41:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:32.319 14:41:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.319 14:41:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:32.319 14:41:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:32.319 14:41:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.319 14:41:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:32.319 14:41:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:32.319 14:41:40 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:32.319 14:41:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:32.319 14:41:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:32.319 14:41:40 -- common/autotest_common.sh@10 -- # set +x 00:22:32.319 14:41:40 -- nvmf/common.sh@470 -- # nvmfpid=77762 00:22:32.319 14:41:40 -- nvmf/common.sh@471 -- # waitforlisten 77762 00:22:32.319 14:41:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:32.319 14:41:40 -- common/autotest_common.sh@817 -- # '[' -z 77762 ']' 00:22:32.319 14:41:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.319 14:41:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:32.319 14:41:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.319 14:41:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:32.319 14:41:40 -- common/autotest_common.sh@10 -- # set +x 00:22:32.319 [2024-04-17 14:41:40.756190] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:22:32.319 [2024-04-17 14:41:40.756283] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.319 [2024-04-17 14:41:40.893060] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:32.578 [2024-04-17 14:41:40.951006] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.578 [2024-04-17 14:41:40.951063] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.578 [2024-04-17 14:41:40.951075] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.578 [2024-04-17 14:41:40.951083] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.578 [2024-04-17 14:41:40.951091] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:32.578 [2024-04-17 14:41:40.951176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.578 [2024-04-17 14:41:40.951188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.145 14:41:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:33.145 14:41:41 -- common/autotest_common.sh@850 -- # return 0 00:22:33.145 14:41:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:33.145 14:41:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:33.145 14:41:41 -- common/autotest_common.sh@10 -- # set +x 00:22:33.145 14:41:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.145 14:41:41 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.145 14:41:41 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:33.403 [2024-04-17 14:41:41.955150] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.403 14:41:41 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:33.662 Malloc0 00:22:33.662 14:41:42 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.228 14:41:42 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:34.486 14:41:42 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.486 [2024-04-17 14:41:43.083022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.743 14:41:43 -- host/timeout.sh@32 -- # bdevperf_pid=77817 00:22:34.743 14:41:43 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:34.743 14:41:43 -- host/timeout.sh@34 -- # waitforlisten 77817 /var/tmp/bdevperf.sock 00:22:34.743 14:41:43 -- common/autotest_common.sh@817 -- # '[' -z 77817 ']' 00:22:34.743 14:41:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.743 14:41:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:34.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.743 14:41:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.743 14:41:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:34.743 14:41:43 -- common/autotest_common.sh@10 -- # set +x 00:22:34.743 [2024-04-17 14:41:43.166406] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
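Stripped of the xtrace prefixes, the target-side provisioning above comes down to five rpc.py calls against the nvmf_tgt that was just started in the namespace (no -s flag is passed, so the default /var/tmp/spdk.sock RPC socket is used). A condensed sketch with the arguments copied from this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as in this run
    $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420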
00:22:34.743 [2024-04-17 14:41:43.166550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77817 ] 00:22:34.743 [2024-04-17 14:41:43.307400] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.001 [2024-04-17 14:41:43.384853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.991 14:41:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:35.991 14:41:44 -- common/autotest_common.sh@850 -- # return 0 00:22:35.991 14:41:44 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:35.991 14:41:44 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:36.249 NVMe0n1 00:22:36.249 14:41:44 -- host/timeout.sh@51 -- # rpc_pid=77840 00:22:36.249 14:41:44 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:36.249 14:41:44 -- host/timeout.sh@53 -- # sleep 1 00:22:36.508 Running I/O for 10 seconds... 00:22:37.445 14:41:45 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.706 [2024-04-17 14:41:46.074532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074616] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074642] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074658] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 [2024-04-17 14:41:46.074683] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set 00:22:37.706 
[2024-04-17 14:41:46.074691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de60 is same with the state(5) to be set (same message repeated at 14:41:46.074700 and 14:41:46.074708)
00:22:37.706 [2024-04-17 14:41:46.074778 - 14:41:46.079392] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: every outstanding command on qid:1 (READ lba:65152-65840 and WRITE lba:65856-66168, len:8 each) completed with ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.710 [2024-04-17 14:41:46.079409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0e8a0 is same with the state(5) to be set
00:22:37.710 [2024-04-17 14:41:46.079429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:37.710 [2024-04-17 14:41:46.079444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:37.710 [2024-04-17 14:41:46.079463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65848 len:8 PRP1 0x0 PRP2 0x0
00:22:37.710 [2024-04-17 14:41:46.079480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.710 [2024-04-17 14:41:46.079543] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b0e8a0 was disconnected and freed. reset controller.
00:22:37.710 [2024-04-17 14:41:46.079698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.710 [2024-04-17 14:41:46.079730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.710 [2024-04-17 14:41:46.079750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.710 [2024-04-17 14:41:46.079767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.710 [2024-04-17 14:41:46.079782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.710 [2024-04-17 14:41:46.079797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.710 [2024-04-17 14:41:46.079814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.710 [2024-04-17 14:41:46.079830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.710 [2024-04-17 14:41:46.079845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6dc0 is same with the state(5) to be set 00:22:37.710 [2024-04-17 14:41:46.080131] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:37.710 [2024-04-17 14:41:46.080188] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa6dc0 (9): Bad file descriptor 00:22:37.710 [2024-04-17 14:41:46.080336] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.710 [2024-04-17 14:41:46.080452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.710 [2024-04-17 14:41:46.080531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.710 [2024-04-17 14:41:46.080561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa6dc0 with addr=10.0.0.2, port=4420 00:22:37.710 [2024-04-17 14:41:46.080582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6dc0 is same with the state(5) to be set 00:22:37.710 [2024-04-17 14:41:46.080613] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa6dc0 (9): Bad file descriptor 00:22:37.710 [2024-04-17 14:41:46.080641] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:37.710 [2024-04-17 14:41:46.080659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:37.710 [2024-04-17 14:41:46.080676] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:37.710 [2024-04-17 14:41:46.080709] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
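Everything from the SQ DELETION aborts down to this first "Resetting controller failed" is the expected reaction to host/timeout.sh pulling the 10.0.0.2:4420 listener out from under the running bdevperf job: the I/O qpair is torn down, and each reconnect attempt fails with connect() errno 111 (ECONNREFUSED) because nothing is listening any more. Per the bdev_nvme_attach_controller options used above, bdev_nvme keeps retrying roughly every --reconnect-delay-sec 2 seconds until --ctrlr-loss-timeout-sec 5 expires, at which point the NVMe0 controller and its NVMe0n1 bdev are deleted, which is exactly what the later get_controller/get_bdev checks assert. A hedged sketch of watching that state machine by hand, with the socket path and names copied from this run:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Poll until bdev_nvme gives up on NVMe0; an empty name list means the
    # controller-loss timeout fired and the controller was deleted.
    while [ -n "$($RPC bdev_nvme_get_controllers | jq -r '.[].name')" ]; do
        sleep 1
    done
    $RPC bdev_get_bdevs | jq -r '.[].name'    # prints nothing once NVMe0n1 is gone as well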
00:22:37.710 [2024-04-17 14:41:46.080728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:37.710 14:41:46 -- host/timeout.sh@56 -- # sleep 2 00:22:39.616 [2024-04-17 14:41:48.080897] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.616 [2024-04-17 14:41:48.081054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.616 [2024-04-17 14:41:48.081126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.616 [2024-04-17 14:41:48.081154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa6dc0 with addr=10.0.0.2, port=4420 00:22:39.616 [2024-04-17 14:41:48.081174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6dc0 is same with the state(5) to be set 00:22:39.616 [2024-04-17 14:41:48.081219] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa6dc0 (9): Bad file descriptor 00:22:39.616 [2024-04-17 14:41:48.081250] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:39.616 [2024-04-17 14:41:48.081268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:39.616 [2024-04-17 14:41:48.081285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:39.616 [2024-04-17 14:41:48.081329] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:39.616 [2024-04-17 14:41:48.081353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.616 14:41:48 -- host/timeout.sh@57 -- # get_controller 00:22:39.616 14:41:48 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:39.616 14:41:48 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:39.875 14:41:48 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:39.875 14:41:48 -- host/timeout.sh@58 -- # get_bdev 00:22:39.875 14:41:48 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:39.875 14:41:48 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:40.133 14:41:48 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:40.133 14:41:48 -- host/timeout.sh@61 -- # sleep 5 00:22:41.534 [2024-04-17 14:41:50.081540] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.534 [2024-04-17 14:41:50.081655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.534 [2024-04-17 14:41:50.081701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.534 [2024-04-17 14:41:50.081718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa6dc0 with addr=10.0.0.2, port=4420 00:22:41.534 [2024-04-17 14:41:50.081732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa6dc0 is same with the state(5) to be set 00:22:41.534 [2024-04-17 14:41:50.081774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa6dc0 (9): Bad file descriptor 00:22:41.534 [2024-04-17 14:41:50.081795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:41.534 [2024-04-17 14:41:50.081805] 
nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:41.534 [2024-04-17 14:41:50.081816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:41.534 [2024-04-17 14:41:50.081846] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:41.534 [2024-04-17 14:41:50.081858] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:44.065 [2024-04-17 14:41:52.081929] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:44.632
00:22:44.632 Latency(us)
00:22:44.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:44.632 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:44.632 Verification LBA range: start 0x0 length 0x4000
00:22:44.632 NVMe0n1 : 8.13 1001.84 3.91 15.75 0.00 125551.46 4110.89 7015926.69
00:22:44.632 ===================================================================================================================
00:22:44.632 Total : 1001.84 3.91 15.75 0.00 125551.46 4110.89 7015926.69
00:22:44.632 0
00:22:45.198 14:41:53 -- host/timeout.sh@62 -- # get_controller
00:22:45.198 14:41:53 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:45.198 14:41:53 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:45.457 14:41:54 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:22:45.457 14:41:54 -- host/timeout.sh@63 -- # get_bdev
00:22:45.457 14:41:54 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:22:45.457 14:41:54 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:46.023 14:41:54 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:22:46.023 14:41:54 -- host/timeout.sh@65 -- # wait 77840
00:22:46.023 14:41:54 -- host/timeout.sh@67 -- # killprocess 77817
00:22:46.024 14:41:54 -- common/autotest_common.sh@936 -- # '[' -z 77817 ']'
00:22:46.024 14:41:54 -- common/autotest_common.sh@940 -- # kill -0 77817
00:22:46.024 14:41:54 -- common/autotest_common.sh@941 -- # uname
00:22:46.024 14:41:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:46.024 14:41:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77817
00:22:46.024 14:41:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:22:46.024 killing process with pid 77817 14:41:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:22:46.024 14:41:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77817' Received shutdown signal, test time was about 9.471589 seconds
00:22:46.024
00:22:46.024 Latency(us)
00:22:46.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:46.024 ===================================================================================================================
00:22:46.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:46.024 14:41:54 -- common/autotest_common.sh@955 -- # kill 77817
00:22:46.024 14:41:54 -- common/autotest_common.sh@960 -- # wait 77817
00:22:46.024 14:41:54 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-04-17 14:41:54.831469] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420
*** 00:22:46.282 14:41:54 -- host/timeout.sh@74 -- # bdevperf_pid=77962 00:22:46.282 14:41:54 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:46.282 14:41:54 -- host/timeout.sh@76 -- # waitforlisten 77962 /var/tmp/bdevperf.sock 00:22:46.282 14:41:54 -- common/autotest_common.sh@817 -- # '[' -z 77962 ']' 00:22:46.282 14:41:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.282 14:41:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:46.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.282 14:41:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.282 14:41:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:46.282 14:41:54 -- common/autotest_common.sh@10 -- # set +x 00:22:46.540 [2024-04-17 14:41:54.895679] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:22:46.540 [2024-04-17 14:41:54.895787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77962 ] 00:22:46.540 [2024-04-17 14:41:55.030262] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.540 [2024-04-17 14:41:55.089398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.474 14:41:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:47.474 14:41:55 -- common/autotest_common.sh@850 -- # return 0 00:22:47.474 14:41:55 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:47.731 14:41:56 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:47.989 NVMe0n1 00:22:47.989 14:41:56 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:47.989 14:41:56 -- host/timeout.sh@84 -- # rpc_pid=77990 00:22:47.989 14:41:56 -- host/timeout.sh@86 -- # sleep 1 00:22:47.989 Running I/O for 10 seconds... 
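For reference, the setup traced above can be reproduced outside the CI harness roughly as follows. This is a sketch assembled from the commands in the trace (paths, address and NQN are the ones this job uses); the comments on the reconnect flags describe their documented intent rather than anything printed by this run.

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# bdevperf on core 2 (-m 0x4), 128-deep 4 KiB verify workload for 10 s, started
# idle (-z) and driven later over the RPC socket; the CI script waits for $sock
# to exist before sending any RPCs.
$spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 -f &

# Same option the test applies before attaching (host/timeout.sh@78).
$spdk/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1

# Attach the TCP controller with the knobs under test: reconnect every 1 s,
# start failing I/O fast after 2 s, declare the controller lost after 5 s.
$spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the verify job (host/timeout.sh@83) and let it run while the listener
# is toggled underneath it.
$spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &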
00:22:48.921 14:41:57 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.182 [2024-04-17 14:41:57.741142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741269] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204030 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.182 [2024-04-17 14:41:57.741476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.182 [2024-04-17 14:41:57.741501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.182 [2024-04-17 14:41:57.741521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.182 [2024-04-17 
14:41:57.741540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1427dc0 is same with the state(5) to be set 00:22:49.182 [2024-04-17 14:41:57.741616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.182 [2024-04-17 14:41:57.741632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.182 [2024-04-17 14:41:57.741664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.182 [2024-04-17 14:41:57.741686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.182 [2024-04-17 14:41:57.741708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.182 [2024-04-17 14:41:57.741730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.182 [2024-04-17 14:41:57.741752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.182 [2024-04-17 14:41:57.741763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.741773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.741784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.741794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.741830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.741844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.741856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.741865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.741876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.741886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.741900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.741910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.741923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.741942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.741978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.741996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:93 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61536 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.742545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 
[2024-04-17 14:41:57.742566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.742587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.742608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.742628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.742650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.742672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.183 [2024-04-17 14:41:57.742693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.183 [2024-04-17 14:41:57.742842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.183 [2024-04-17 14:41:57.742854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.742864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.742876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.742886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.742897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.742907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.742918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.742928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.742940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.742963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.742979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.742996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 
14:41:57.743470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.184 [2024-04-17 14:41:57.743671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.184 [2024-04-17 14:41:57.743870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.184 [2024-04-17 14:41:57.743882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.743892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.743903] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.743913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.743924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.743934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.743945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.743968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.743980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.743990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.744011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.744033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61928 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.185 [2024-04-17 14:41:57.744333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:49.185 [2024-04-17 14:41:57.744354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.744375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.744399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.744424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.744445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.744467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.185 [2024-04-17 14:41:57.744489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.185 [2024-04-17 14:41:57.744546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.185 [2024-04-17 14:41:57.744556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61408 len:8 PRP1 0x0 PRP2 0x0 00:22:49.185 [2024-04-17 14:41:57.744565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.185 [2024-04-17 14:41:57.744615] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x148faf0 was disconnected and freed. reset controller. 
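The block of NOTICE lines above is the expected fallout of pulling the listener mid-run: the target deletes the queue, so the four outstanding ASYNC EVENT REQUESTs on the admin queue and every queued read/write of the verify job complete with ABORTED - SQ DELETION before the qpair is freed and the reconnect loop below begins. To tally those aborts from a saved copy of this console output (the file name here is just a placeholder), something like this works even with several entries per wrapped line:

# I/O-queue aborts (qid:1) vs. admin-queue aborts (qid:0)
grep -o 'ABORTED - SQ DELETION (00/08) qid:1' console.log | wc -l
grep -o 'ABORTED - SQ DELETION (00/08) qid:0' console.log | wc -l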
00:22:49.185 [2024-04-17 14:41:57.744866] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.185 [2024-04-17 14:41:57.744903] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1427dc0 (9): Bad file descriptor 00:22:49.185 [2024-04-17 14:41:57.745050] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.185 [2024-04-17 14:41:57.745128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.185 [2024-04-17 14:41:57.745173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.185 [2024-04-17 14:41:57.745190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1427dc0 with addr=10.0.0.2, port=4420 00:22:49.185 [2024-04-17 14:41:57.745201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1427dc0 is same with the state(5) to be set 00:22:49.185 [2024-04-17 14:41:57.745222] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1427dc0 (9): Bad file descriptor 00:22:49.185 [2024-04-17 14:41:57.745238] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.185 [2024-04-17 14:41:57.745248] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:49.185 [2024-04-17 14:41:57.745259] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:49.185 [2024-04-17 14:41:57.745280] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.185 [2024-04-17 14:41:57.745291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.185 14:41:57 -- host/timeout.sh@90 -- # sleep 1 00:22:50.560 [2024-04-17 14:41:58.745450] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.560 [2024-04-17 14:41:58.745563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.560 [2024-04-17 14:41:58.745611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.560 [2024-04-17 14:41:58.745628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1427dc0 with addr=10.0.0.2, port=4420 00:22:50.560 [2024-04-17 14:41:58.745642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1427dc0 is same with the state(5) to be set 00:22:50.560 [2024-04-17 14:41:58.745669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1427dc0 (9): Bad file descriptor 00:22:50.560 [2024-04-17 14:41:58.745688] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.560 [2024-04-17 14:41:58.745698] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:50.560 [2024-04-17 14:41:58.745709] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.560 [2024-04-17 14:41:58.745736] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
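Stripped of the test plumbing, the outage being ridden out here is produced by just two target-side RPCs, visible in the trace at host/timeout.sh@87 (remove) and @91 (add back); NQN, address and port are the ones this job uses. Because the listener returns well inside the 5 s --ctrlr-loss-timeout-sec set at attach time, the next reconnect attempt can succeed, which is what the "Resetting controller successful" notice further below reports.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Drop the TCP listener: queued I/O is aborted and the initiator keeps retrying
# connect() against the closed port (the errno 111 lines above).
$rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420

# Restore it before the controller-loss timeout expires so the reconnect succeeds.
$rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420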
00:22:50.560 [2024-04-17 14:41:58.745748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.560 14:41:58 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.560 [2024-04-17 14:41:58.986062] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.560 14:41:59 -- host/timeout.sh@92 -- # wait 77990 00:22:51.500 [2024-04-17 14:41:59.761871] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:58.147 00:22:58.147 Latency(us) 00:22:58.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.147 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:58.147 Verification LBA range: start 0x0 length 0x4000 00:22:58.147 NVMe0n1 : 10.01 6036.70 23.58 0.00 0.00 21161.54 1117.09 3019898.88 00:22:58.147 =================================================================================================================== 00:22:58.147 Total : 6036.70 23.58 0.00 0.00 21161.54 1117.09 3019898.88 00:22:58.147 0 00:22:58.147 14:42:06 -- host/timeout.sh@97 -- # rpc_pid=78096 00:22:58.147 14:42:06 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.147 14:42:06 -- host/timeout.sh@98 -- # sleep 1 00:22:58.147 Running I/O for 10 seconds... 00:22:59.081 14:42:07 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.341 [2024-04-17 14:42:07.828126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 
14:42:07.828275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828292] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828350] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828400] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828425] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828433] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same 
with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828486] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828503] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828545] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828595] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828603] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828620] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828645] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828654] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828671] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828679] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828696] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828713] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828721] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828730] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828748] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828789] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the 
state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828839] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828889] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828898] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828906] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828939] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828983] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.828991] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829009] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829052] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1062b80 is same with the state(5) to be set 00:22:59.341 [2024-04-17 14:42:07.829181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-04-17 14:42:07.829213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-04-17 14:42:07.829235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-04-17 14:42:07.829247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-04-17 14:42:07.829260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-04-17 14:42:07.829270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-04-17 14:42:07.829281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-04-17 14:42:07.829291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-04-17 14:42:07.829302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.829984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.829996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.342 [2024-04-17 14:42:07.830017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830229] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-04-17 14:42:07.830672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-04-17 14:42:07.830683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62328 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.830980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.830990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.343 [2024-04-17 14:42:07.831116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-04-17 14:42:07.831641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-04-17 14:42:07.831865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.343 [2024-04-17 14:42:07.831874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-04-17 14:42:07.831886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.344 [2024-04-17 14:42:07.831896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-04-17 14:42:07.831907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.344 [2024-04-17 14:42:07.831917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-04-17 14:42:07.831928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.344 [2024-04-17 14:42:07.831938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-04-17 14:42:07.831959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.344 [2024-04-17 14:42:07.831971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-04-17 14:42:07.831982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-04-17 14:42:07.831992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-04-17 14:42:07.832004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14916e0 is same with the state(5) to be set 00:22:59.344 [2024-04-17 14:42:07.832017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.344 [2024-04-17 14:42:07.832025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.344 [2024-04-17 14:42:07.832034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:22:59.344 [2024-04-17 14:42:07.832043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-04-17 14:42:07.832085] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14916e0 was disconnected and freed. reset controller. 00:22:59.344 [2024-04-17 14:42:07.832317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.344 [2024-04-17 14:42:07.832391] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1427dc0 (9): Bad file descriptor 00:22:59.344 [2024-04-17 14:42:07.832508] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.344 [2024-04-17 14:42:07.832559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.344 [2024-04-17 14:42:07.832611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.344 [2024-04-17 14:42:07.832628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1427dc0 with addr=10.0.0.2, port=4420 00:22:59.344 [2024-04-17 14:42:07.832638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1427dc0 is same with the state(5) to be set 00:22:59.344 [2024-04-17 14:42:07.832656] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1427dc0 (9): Bad file descriptor 00:22:59.344 [2024-04-17 14:42:07.832672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.344 [2024-04-17 14:42:07.832681] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:59.344 [2024-04-17 14:42:07.832692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:59.344 [2024-04-17 14:42:07.832712] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.344 [2024-04-17 14:42:07.832724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.344 14:42:07 -- host/timeout.sh@101 -- # sleep 3 00:23:00.277 [2024-04-17 14:42:08.832875] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.277 [2024-04-17 14:42:08.833005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.277 [2024-04-17 14:42:08.833054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.277 [2024-04-17 14:42:08.833071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1427dc0 with addr=10.0.0.2, port=4420 00:23:00.277 [2024-04-17 14:42:08.833086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1427dc0 is same with the state(5) to be set 00:23:00.277 [2024-04-17 14:42:08.833114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1427dc0 (9): Bad file descriptor 00:23:00.277 [2024-04-17 14:42:08.833134] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.277 [2024-04-17 14:42:08.833144] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:00.277 [2024-04-17 14:42:08.833155] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.277 [2024-04-17 14:42:08.833183] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:00.277 [2024-04-17 14:42:08.833195] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:01.277 [2024-04-17 14:42:09.833361] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.277 [2024-04-17 14:42:09.833470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.277 [2024-04-17 14:42:09.833518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.277 [2024-04-17 14:42:09.833535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1427dc0 with addr=10.0.0.2, port=4420 00:23:01.277 [2024-04-17 14:42:09.833549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1427dc0 is same with the state(5) to be set 00:23:01.277 [2024-04-17 14:42:09.833577] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1427dc0 (9): Bad file descriptor 00:23:01.277 [2024-04-17 14:42:09.833596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:01.277 [2024-04-17 14:42:09.833606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:01.277 [2024-04-17 14:42:09.833616] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:01.277 [2024-04-17 14:42:09.833644] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.277 [2024-04-17 14:42:09.833656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:02.650 [2024-04-17 14:42:10.837045] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.650 [2024-04-17 14:42:10.837155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.650 [2024-04-17 14:42:10.837201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.650 [2024-04-17 14:42:10.837218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1427dc0 with addr=10.0.0.2, port=4420 00:23:02.650 [2024-04-17 14:42:10.837232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1427dc0 is same with the state(5) to be set 00:23:02.650 [2024-04-17 14:42:10.837492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1427dc0 (9): Bad file descriptor 00:23:02.650 [2024-04-17 14:42:10.837743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:02.650 [2024-04-17 14:42:10.837757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:02.650 [2024-04-17 14:42:10.837769] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:02.650 [2024-04-17 14:42:10.841723] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:02.650 [2024-04-17 14:42:10.841757] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:02.650 14:42:10 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.650 [2024-04-17 14:42:11.101612] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.650 14:42:11 -- host/timeout.sh@103 -- # wait 78096 00:23:03.583 [2024-04-17 14:42:11.877104] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:08.848 00:23:08.849 Latency(us) 00:23:08.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.849 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.849 Verification LBA range: start 0x0 length 0x4000 00:23:08.849 NVMe0n1 : 10.01 5132.42 20.05 3572.54 0.00 14666.50 700.04 3019898.88 00:23:08.849 =================================================================================================================== 00:23:08.849 Total : 5132.42 20.05 3572.54 0.00 14666.50 0.00 3019898.88 00:23:08.849 0 00:23:08.849 14:42:16 -- host/timeout.sh@105 -- # killprocess 77962 00:23:08.849 14:42:16 -- common/autotest_common.sh@936 -- # '[' -z 77962 ']' 00:23:08.849 14:42:16 -- common/autotest_common.sh@940 -- # kill -0 77962 00:23:08.849 14:42:16 -- common/autotest_common.sh@941 -- # uname 00:23:08.849 14:42:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:08.849 14:42:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77962 00:23:08.849 killing process with pid 77962 00:23:08.849 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.849 00:23:08.849 Latency(us) 00:23:08.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.849 =================================================================================================================== 00:23:08.849 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.849 14:42:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:08.849 14:42:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:08.849 14:42:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77962' 00:23:08.849 14:42:16 -- common/autotest_common.sh@955 -- # kill 77962 00:23:08.849 14:42:16 -- common/autotest_common.sh@960 -- # wait 77962 00:23:08.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.849 14:42:16 -- host/timeout.sh@110 -- # bdevperf_pid=78210 00:23:08.849 14:42:16 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:08.849 14:42:16 -- host/timeout.sh@112 -- # waitforlisten 78210 /var/tmp/bdevperf.sock 00:23:08.849 14:42:16 -- common/autotest_common.sh@817 -- # '[' -z 78210 ']' 00:23:08.849 14:42:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.849 14:42:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:08.849 14:42:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.849 14:42:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:08.849 14:42:16 -- common/autotest_common.sh@10 -- # set +x 00:23:08.849 [2024-04-17 14:42:16.957289] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:23:08.849 [2024-04-17 14:42:16.957370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78210 ] 00:23:08.849 [2024-04-17 14:42:17.091807] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.849 [2024-04-17 14:42:17.149496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.849 14:42:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:08.849 14:42:17 -- common/autotest_common.sh@850 -- # return 0 00:23:08.849 14:42:17 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 78210 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:08.849 14:42:17 -- host/timeout.sh@116 -- # dtrace_pid=78213 00:23:08.849 14:42:17 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:09.109 14:42:17 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:09.368 NVMe0n1 00:23:09.368 14:42:17 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:09.368 14:42:17 -- host/timeout.sh@124 -- # rpc_pid=78260 00:23:09.368 14:42:17 -- host/timeout.sh@125 -- # sleep 1 00:23:09.368 Running I/O for 10 seconds... 00:23:10.303 14:42:18 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.564 [2024-04-17 14:42:19.045856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.045919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.045972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.045987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 
14:42:19.046277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.564 [2024-04-17 14:42:19.046386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.564 [2024-04-17 14:42:19.046396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046703] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.046980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.046991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.565 [2024-04-17 14:42:19.047244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.565 [2024-04-17 14:42:19.047254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 
14:42:19.047359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:68352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047789] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.047983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.047993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.048004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.048014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.048025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.048035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.048049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.048059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.048071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.048080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.048091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.048101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.566 [2024-04-17 14:42:19.048113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.566 [2024-04-17 14:42:19.048122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:10.567 [2024-04-17 14:42:19.048246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 
14:42:19.048459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.567 [2024-04-17 14:42:19.048681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1914fa0 is same with the state(5) to be set 00:23:10.567 [2024-04-17 14:42:19.048704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.567 [2024-04-17 14:42:19.048712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.567 [2024-04-17 14:42:19.048722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115176 len:8 PRP1 0x0 PRP2 0x0 00:23:10.567 [2024-04-17 14:42:19.048733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.567 [2024-04-17 14:42:19.048777] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1914fa0 was disconnected and freed. reset controller. 00:23:10.567 [2024-04-17 14:42:19.048859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.567 [2024-04-17 14:42:19.048883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.568 [2024-04-17 14:42:19.048895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.568 [2024-04-17 14:42:19.048905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.568 [2024-04-17 14:42:19.048915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.568 [2024-04-17 14:42:19.048925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.568 [2024-04-17 14:42:19.048935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.568 [2024-04-17 14:42:19.048944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.568 [2024-04-17 14:42:19.048967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d2030 is same with the state(5) to be set 00:23:10.568 [2024-04-17 14:42:19.049217] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.568 [2024-04-17 14:42:19.049244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d2030 (9): Bad file descriptor 00:23:10.568 [2024-04-17 14:42:19.049350] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.568 [2024-04-17 14:42:19.049424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.568 [2024-04-17 14:42:19.049469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.568 [2024-04-17 14:42:19.049486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d2030 with 
addr=10.0.0.2, port=4420 00:23:10.568 [2024-04-17 14:42:19.049497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d2030 is same with the state(5) to be set 00:23:10.568 [2024-04-17 14:42:19.049517] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d2030 (9): Bad file descriptor 00:23:10.568 [2024-04-17 14:42:19.049535] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.568 [2024-04-17 14:42:19.049546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:10.568 [2024-04-17 14:42:19.049556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.568 [2024-04-17 14:42:19.049577] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:10.568 [2024-04-17 14:42:19.049589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.568 14:42:19 -- host/timeout.sh@128 -- # wait 78260 00:23:12.468 [2024-04-17 14:42:21.049816] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.468 [2024-04-17 14:42:21.049978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.468 [2024-04-17 14:42:21.050039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.468 [2024-04-17 14:42:21.050058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d2030 with addr=10.0.0.2, port=4420 00:23:12.468 [2024-04-17 14:42:21.050072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d2030 is same with the state(5) to be set 00:23:12.468 [2024-04-17 14:42:21.050102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d2030 (9): Bad file descriptor 00:23:12.468 [2024-04-17 14:42:21.050123] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.468 [2024-04-17 14:42:21.050135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.468 [2024-04-17 14:42:21.050146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.468 [2024-04-17 14:42:21.050175] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
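errno = 111 in the connect() failures above is ECONNREFUSED: the connection attempts are being actively refused because nothing is listening on 10.0.0.2:4420 at this point in the test, so each controller reset from bdev_nvme fails and is retried on the roughly two-second cadence visible in the timestamps (14:42:19, :21, :23, :25). When triaging a run like this offline, the same events can be tallied from a saved copy of the console output with plain grep; the log file name below is only a placeholder, the CI job does not write it under that name:

  # Summarize the reconnect storm from a saved copy of this console log.
  grep -c 'connect() failed, errno = 111' nvmf-timeout-console.log   # refused reconnect attempts
  grep -c 'Resetting controller failed.' nvmf-timeout-console.log    # failed reset cycles

The timeout test itself applies the same idea to its own trace file a few lines further down, grepping for 'reconnect delay bdev controller NVMe0'.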
00:23:12.468 [2024-04-17 14:42:21.050189] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:14.999 [2024-04-17 14:42:23.050372] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.999 [2024-04-17 14:42:23.050488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.999 [2024-04-17 14:42:23.050537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.999 [2024-04-17 14:42:23.050555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d2030 with addr=10.0.0.2, port=4420 00:23:14.999 [2024-04-17 14:42:23.050569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d2030 is same with the state(5) to be set 00:23:14.999 [2024-04-17 14:42:23.050599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d2030 (9): Bad file descriptor 00:23:14.999 [2024-04-17 14:42:23.050620] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:14.999 [2024-04-17 14:42:23.050631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:14.999 [2024-04-17 14:42:23.050642] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:14.999 [2024-04-17 14:42:23.050669] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:14.999 [2024-04-17 14:42:23.050682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:16.903 [2024-04-17 14:42:25.050767] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:17.470 00:23:17.470 Latency(us) 00:23:17.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.470 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:17.470 NVMe0n1 : 8.10 1913.00 7.47 15.80 0.00 66307.51 1586.27 7015926.69 00:23:17.470 =================================================================================================================== 00:23:17.470 Total : 1913.00 7.47 15.80 0.00 66307.51 1586.27 7015926.69 00:23:17.470 0 00:23:17.470 14:42:26 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:17.470 Attaching 5 probes... 
00:23:17.470 1321.036370: reset bdev controller NVMe0 00:23:17.470 1321.112423: reconnect bdev controller NVMe0 00:23:17.470 3321.481721: reconnect delay bdev controller NVMe0 00:23:17.470 3321.510133: reconnect bdev controller NVMe0 00:23:17.470 5322.064288: reconnect delay bdev controller NVMe0 00:23:17.470 5322.087254: reconnect bdev controller NVMe0 00:23:17.470 7322.553249: reconnect delay bdev controller NVMe0 00:23:17.471 7322.576952: reconnect bdev controller NVMe0 00:23:17.730 14:42:26 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:17.730 14:42:26 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:17.730 14:42:26 -- host/timeout.sh@136 -- # kill 78213 00:23:17.730 14:42:26 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:17.730 14:42:26 -- host/timeout.sh@139 -- # killprocess 78210 00:23:17.730 14:42:26 -- common/autotest_common.sh@936 -- # '[' -z 78210 ']' 00:23:17.730 14:42:26 -- common/autotest_common.sh@940 -- # kill -0 78210 00:23:17.730 14:42:26 -- common/autotest_common.sh@941 -- # uname 00:23:17.730 14:42:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.730 14:42:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78210 00:23:17.730 killing process with pid 78210 00:23:17.730 Received shutdown signal, test time was about 8.159669 seconds 00:23:17.730 00:23:17.730 Latency(us) 00:23:17.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.730 =================================================================================================================== 00:23:17.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.730 14:42:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:17.730 14:42:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:17.730 14:42:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78210' 00:23:17.730 14:42:26 -- common/autotest_common.sh@955 -- # kill 78210 00:23:17.730 14:42:26 -- common/autotest_common.sh@960 -- # wait 78210 00:23:17.730 14:42:26 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.989 14:42:26 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:17.989 14:42:26 -- host/timeout.sh@145 -- # nvmftestfini 00:23:17.989 14:42:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:17.989 14:42:26 -- nvmf/common.sh@117 -- # sync 00:23:18.248 14:42:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.248 14:42:26 -- nvmf/common.sh@120 -- # set +e 00:23:18.248 14:42:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.248 14:42:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.248 rmmod nvme_tcp 00:23:18.248 rmmod nvme_fabrics 00:23:18.248 rmmod nvme_keyring 00:23:18.248 14:42:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.248 14:42:26 -- nvmf/common.sh@124 -- # set -e 00:23:18.248 14:42:26 -- nvmf/common.sh@125 -- # return 0 00:23:18.248 14:42:26 -- nvmf/common.sh@478 -- # '[' -n 77762 ']' 00:23:18.248 14:42:26 -- nvmf/common.sh@479 -- # killprocess 77762 00:23:18.248 14:42:26 -- common/autotest_common.sh@936 -- # '[' -z 77762 ']' 00:23:18.248 14:42:26 -- common/autotest_common.sh@940 -- # kill -0 77762 00:23:18.248 14:42:26 -- common/autotest_common.sh@941 -- # uname 00:23:18.248 14:42:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:18.248 14:42:26 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 77762 00:23:18.248 killing process with pid 77762 00:23:18.248 14:42:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:18.248 14:42:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:18.248 14:42:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77762' 00:23:18.248 14:42:26 -- common/autotest_common.sh@955 -- # kill 77762 00:23:18.248 14:42:26 -- common/autotest_common.sh@960 -- # wait 77762 00:23:18.507 14:42:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:18.507 14:42:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:18.507 14:42:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:18.507 14:42:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.507 14:42:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.507 14:42:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.507 14:42:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.507 14:42:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.507 14:42:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:18.507 00:23:18.508 real 0m46.705s 00:23:18.508 user 2m17.584s 00:23:18.508 sys 0m5.485s 00:23:18.508 14:42:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:18.508 14:42:26 -- common/autotest_common.sh@10 -- # set +x 00:23:18.508 ************************************ 00:23:18.508 END TEST nvmf_timeout 00:23:18.508 ************************************ 00:23:18.508 14:42:26 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:23:18.508 14:42:26 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:23:18.508 14:42:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:18.508 14:42:26 -- common/autotest_common.sh@10 -- # set +x 00:23:18.508 14:42:27 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:23:18.508 ************************************ 00:23:18.508 END TEST nvmf_tcp 00:23:18.508 ************************************ 00:23:18.508 00:23:18.508 real 8m44.092s 00:23:18.508 user 20m46.675s 00:23:18.508 sys 2m20.652s 00:23:18.508 14:42:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:18.508 14:42:27 -- common/autotest_common.sh@10 -- # set +x 00:23:18.508 14:42:27 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:23:18.508 14:42:27 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:18.508 14:42:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:18.508 14:42:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:18.508 14:42:27 -- common/autotest_common.sh@10 -- # set +x 00:23:18.767 ************************************ 00:23:18.767 START TEST nvmf_dif 00:23:18.767 ************************************ 00:23:18.767 14:42:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:18.767 * Looking for test storage... 
00:23:18.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:18.767 14:42:27 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:18.767 14:42:27 -- nvmf/common.sh@7 -- # uname -s 00:23:18.767 14:42:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.767 14:42:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.767 14:42:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.767 14:42:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.767 14:42:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.767 14:42:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.767 14:42:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.767 14:42:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.767 14:42:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.767 14:42:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.767 14:42:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c475d660-18c3-4238-bb35-f293e0cc1403 00:23:18.767 14:42:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=c475d660-18c3-4238-bb35-f293e0cc1403 00:23:18.767 14:42:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.767 14:42:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.767 14:42:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:18.767 14:42:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.767 14:42:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:18.767 14:42:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.767 14:42:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.767 14:42:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.767 14:42:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.767 14:42:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.767 14:42:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.767 14:42:27 -- paths/export.sh@5 -- # export PATH 00:23:18.767 14:42:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.767 14:42:27 -- nvmf/common.sh@47 -- # : 0 00:23:18.767 14:42:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.767 14:42:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.767 14:42:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.767 14:42:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.767 14:42:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.767 14:42:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.767 14:42:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.767 14:42:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.767 14:42:27 -- target/dif.sh@15 -- # NULL_META=16 00:23:18.767 14:42:27 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:18.767 14:42:27 -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:18.767 14:42:27 -- target/dif.sh@15 -- # NULL_DIF=1 00:23:18.767 14:42:27 -- target/dif.sh@135 -- # nvmftestinit 00:23:18.767 14:42:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:18.767 14:42:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.767 14:42:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:18.767 14:42:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:18.767 14:42:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:18.767 14:42:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.767 14:42:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:18.767 14:42:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.767 14:42:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:18.767 14:42:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:18.767 14:42:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:18.767 14:42:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:18.767 14:42:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:18.767 14:42:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:18.767 14:42:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.767 14:42:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.767 14:42:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:18.767 14:42:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:18.767 14:42:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:18.767 14:42:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:18.767 14:42:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:18.768 14:42:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.768 14:42:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:18.768 14:42:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:18.768 14:42:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:18.768 14:42:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:18.768 14:42:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:18.768 14:42:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:18.768 Cannot find device "nvmf_tgt_br" 
00:23:18.768 14:42:27 -- nvmf/common.sh@155 -- # true 00:23:18.768 14:42:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:18.768 Cannot find device "nvmf_tgt_br2" 00:23:18.768 14:42:27 -- nvmf/common.sh@156 -- # true 00:23:18.768 14:42:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:18.768 14:42:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:18.768 Cannot find device "nvmf_tgt_br" 00:23:18.768 14:42:27 -- nvmf/common.sh@158 -- # true 00:23:18.768 14:42:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:18.768 Cannot find device "nvmf_tgt_br2" 00:23:18.768 14:42:27 -- nvmf/common.sh@159 -- # true 00:23:18.768 14:42:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:18.768 14:42:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:18.768 14:42:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:18.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:18.768 14:42:27 -- nvmf/common.sh@162 -- # true 00:23:18.768 14:42:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:18.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.027 14:42:27 -- nvmf/common.sh@163 -- # true 00:23:19.027 14:42:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:19.027 14:42:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:19.027 14:42:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:19.027 14:42:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:19.027 14:42:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:19.027 14:42:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:19.027 14:42:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:19.027 14:42:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:19.027 14:42:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:19.027 14:42:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:19.027 14:42:27 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:19.027 14:42:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:19.027 14:42:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:19.027 14:42:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:19.027 14:42:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:19.027 14:42:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:19.027 14:42:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:19.027 14:42:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:19.027 14:42:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:19.027 14:42:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:19.027 14:42:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:19.027 14:42:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:19.027 14:42:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:19.027 14:42:27 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:19.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:23:19.027 00:23:19.027 --- 10.0.0.2 ping statistics --- 00:23:19.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.027 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:19.027 14:42:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:19.027 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:19.027 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:19.027 00:23:19.027 --- 10.0.0.3 ping statistics --- 00:23:19.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.027 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:19.027 14:42:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:19.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:19.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:23:19.027 00:23:19.027 --- 10.0.0.1 ping statistics --- 00:23:19.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.027 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:23:19.027 14:42:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.027 14:42:27 -- nvmf/common.sh@422 -- # return 0 00:23:19.027 14:42:27 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:23:19.027 14:42:27 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:19.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:19.286 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:19.286 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:19.545 14:42:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.545 14:42:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:19.545 14:42:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:19.545 14:42:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.545 14:42:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:19.545 14:42:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:19.545 14:42:27 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:19.545 14:42:27 -- target/dif.sh@137 -- # nvmfappstart 00:23:19.545 14:42:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:19.545 14:42:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:19.545 14:42:27 -- common/autotest_common.sh@10 -- # set +x 00:23:19.545 14:42:27 -- nvmf/common.sh@470 -- # nvmfpid=78703 00:23:19.545 14:42:27 -- nvmf/common.sh@471 -- # waitforlisten 78703 00:23:19.545 14:42:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:19.545 14:42:27 -- common/autotest_common.sh@817 -- # '[' -z 78703 ']' 00:23:19.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.545 14:42:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.545 14:42:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:19.545 14:42:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
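Condensed, the topology that nvmf_veth_init built above (and that the three pings just verified) is the following; every command is taken from the trace, only grouped and commented, with the link-up and FORWARD-rule steps omitted for brevity:

  # Target runs in its own network namespace, reachable from the host over veth pairs.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target interface 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target interface 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                               # host-side bridge ties the peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the target

With that in place the nvmf target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF, as traced above), which is why the RPC socket at /var/tmp/spdk.sock stays reachable from the host shell while the NVMe/TCP data path runs through the namespace on 10.0.0.2:4420.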
00:23:19.545 14:42:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:19.545 14:42:27 -- common/autotest_common.sh@10 -- # set +x 00:23:19.545 [2024-04-17 14:42:27.986704] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:23:19.545 [2024-04-17 14:42:27.987070] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.545 [2024-04-17 14:42:28.129225] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.804 [2024-04-17 14:42:28.196082] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.804 [2024-04-17 14:42:28.196321] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.804 [2024-04-17 14:42:28.196414] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.804 [2024-04-17 14:42:28.196456] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.804 [2024-04-17 14:42:28.196466] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.804 [2024-04-17 14:42:28.196514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.371 14:42:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:20.371 14:42:28 -- common/autotest_common.sh@850 -- # return 0 00:23:20.371 14:42:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:20.371 14:42:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:20.371 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:23:20.630 14:42:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.630 14:42:28 -- target/dif.sh@139 -- # create_transport 00:23:20.630 14:42:28 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:20.630 14:42:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.630 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:23:20.630 [2024-04-17 14:42:28.980577] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.630 14:42:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.630 14:42:28 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:20.630 14:42:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:20.630 14:42:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:20.630 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:23:20.630 ************************************ 00:23:20.630 START TEST fio_dif_1_default 00:23:20.630 ************************************ 00:23:20.630 14:42:29 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:23:20.630 14:42:29 -- target/dif.sh@86 -- # create_subsystems 0 00:23:20.630 14:42:29 -- target/dif.sh@28 -- # local sub 00:23:20.630 14:42:29 -- target/dif.sh@30 -- # for sub in "$@" 00:23:20.630 14:42:29 -- target/dif.sh@31 -- # create_subsystem 0 00:23:20.630 14:42:29 -- target/dif.sh@18 -- # local sub_id=0 00:23:20.630 14:42:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:20.630 14:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.630 14:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:20.630 bdev_null0 00:23:20.630 14:42:29 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.630 14:42:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:20.630 14:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.630 14:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:20.630 14:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.630 14:42:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:20.630 14:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.630 14:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:20.630 14:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.630 14:42:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:20.630 14:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.630 14:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:20.630 [2024-04-17 14:42:29.088691] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.630 14:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.630 14:42:29 -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:20.630 14:42:29 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:20.630 14:42:29 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:20.630 14:42:29 -- nvmf/common.sh@521 -- # config=() 00:23:20.630 14:42:29 -- nvmf/common.sh@521 -- # local subsystem config 00:23:20.630 14:42:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:20.630 14:42:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:20.630 { 00:23:20.630 "params": { 00:23:20.630 "name": "Nvme$subsystem", 00:23:20.630 "trtype": "$TEST_TRANSPORT", 00:23:20.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.630 "adrfam": "ipv4", 00:23:20.630 "trsvcid": "$NVMF_PORT", 00:23:20.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.630 "hdgst": ${hdgst:-false}, 00:23:20.630 "ddgst": ${ddgst:-false} 00:23:20.630 }, 00:23:20.630 "method": "bdev_nvme_attach_controller" 00:23:20.630 } 00:23:20.630 EOF 00:23:20.630 )") 00:23:20.630 14:42:29 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.630 14:42:29 -- target/dif.sh@82 -- # gen_fio_conf 00:23:20.630 14:42:29 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.630 14:42:29 -- target/dif.sh@54 -- # local file 00:23:20.630 14:42:29 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:20.631 14:42:29 -- target/dif.sh@56 -- # cat 00:23:20.631 14:42:29 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:20.631 14:42:29 -- nvmf/common.sh@543 -- # cat 00:23:20.631 14:42:29 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:20.631 14:42:29 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.631 14:42:29 -- common/autotest_common.sh@1327 -- # shift 00:23:20.631 14:42:29 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:20.631 14:42:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.631 14:42:29 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:20.631 14:42:29 -- target/dif.sh@72 -- # (( file <= files )) 
00:23:20.631 14:42:29 -- nvmf/common.sh@545 -- # jq . 00:23:20.631 14:42:29 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.631 14:42:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:20.631 14:42:29 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:20.631 14:42:29 -- nvmf/common.sh@546 -- # IFS=, 00:23:20.631 14:42:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:20.631 "params": { 00:23:20.631 "name": "Nvme0", 00:23:20.631 "trtype": "tcp", 00:23:20.631 "traddr": "10.0.0.2", 00:23:20.631 "adrfam": "ipv4", 00:23:20.631 "trsvcid": "4420", 00:23:20.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:20.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:20.631 "hdgst": false, 00:23:20.631 "ddgst": false 00:23:20.631 }, 00:23:20.631 "method": "bdev_nvme_attach_controller" 00:23:20.631 }' 00:23:20.631 14:42:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:20.631 14:42:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:20.631 14:42:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.631 14:42:29 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.631 14:42:29 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:20.631 14:42:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:20.631 14:42:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:20.631 14:42:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:20.631 14:42:29 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:20.631 14:42:29 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.889 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:20.889 fio-3.35 00:23:20.889 Starting 1 thread 00:23:21.148 [2024-04-17 14:42:29.623703] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
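The bdev_nvme_attach_controller parameters printed by jq above are what the fio spdk_bdev plugin consumes through /dev/fd/62. The same run can be reproduced stand-alone with an ordinary JSON file and command-line job options; the outer "subsystems" wrapper and the job options below follow the usual SPDK fio-plugin layout and the parameters shown on the filename0 line (randread, 4 KiB, iodepth 4) rather than being printed verbatim in this trace, so treat it as a sketch:

  # Stand-alone equivalent of the fio invocation traced above (illustrative only).
  cat > nvme0.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } } ] } ] }
  EOF
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf nvme0.json \
          --thread=1 --filename=Nvme0n1 --rw=randread --bs=4096 --iodepth=4 \
          --time_based --runtime=10

The rpc.c errors on either side of this point are expected noise: fio's embedded SPDK instance tries to register its own RPC listener on the default /var/tmp/spdk.sock, finds the nvmf target already owns it, and carries on without one.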
00:23:21.148 [2024-04-17 14:42:29.623774] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:33.354 00:23:33.354 filename0: (groupid=0, jobs=1): err= 0: pid=78774: Wed Apr 17 14:42:39 2024 00:23:33.354 read: IOPS=8390, BW=32.8MiB/s (34.4MB/s)(328MiB/10001msec) 00:23:33.354 slat (nsec): min=6668, max=53132, avg=8758.56, stdev=2456.53 00:23:33.354 clat (usec): min=376, max=5828, avg=450.73, stdev=44.68 00:23:33.354 lat (usec): min=383, max=5856, avg=459.49, stdev=45.06 00:23:33.354 clat percentiles (usec): 00:23:33.354 | 1.00th=[ 416], 5.00th=[ 420], 10.00th=[ 424], 20.00th=[ 433], 00:23:33.354 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 449], 60.00th=[ 453], 00:23:33.354 | 70.00th=[ 457], 80.00th=[ 465], 90.00th=[ 478], 95.00th=[ 490], 00:23:33.354 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 627], 99.95th=[ 652], 00:23:33.354 | 99.99th=[ 979] 00:23:33.354 bw ( KiB/s): min=32640, max=34048, per=100.00%, avg=33586.11, stdev=402.02, samples=19 00:23:33.354 iops : min= 8160, max= 8512, avg=8396.53, stdev=100.51, samples=19 00:23:33.354 lat (usec) : 500=97.28%, 750=2.71%, 1000=0.01% 00:23:33.354 lat (msec) : 2=0.01%, 10=0.01% 00:23:33.354 cpu : usr=84.17%, sys=13.91%, ctx=41, majf=0, minf=0 00:23:33.354 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:33.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.354 issued rwts: total=83916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.354 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:33.354 00:23:33.354 Run status group 0 (all jobs): 00:23:33.354 READ: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=328MiB (344MB), run=10001-10001msec 00:23:33.354 14:42:39 -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:33.354 14:42:39 -- target/dif.sh@43 -- # local sub 00:23:33.354 14:42:39 -- target/dif.sh@45 -- # for sub in "$@" 00:23:33.354 14:42:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:33.354 14:42:39 -- target/dif.sh@36 -- # local sub_id=0 00:23:33.354 14:42:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:33.354 14:42:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.354 14:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:33.354 14:42:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.354 14:42:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:33.354 14:42:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.354 14:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:33.354 14:42:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.354 00:23:33.354 real 0m10.879s 00:23:33.354 user 0m8.997s 00:23:33.354 sys 0m1.594s 00:23:33.354 14:42:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.354 ************************************ 00:23:33.354 END TEST fio_dif_1_default 00:23:33.354 ************************************ 00:23:33.354 14:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:33.354 14:42:39 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:33.354 14:42:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:33.354 14:42:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.354 14:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:33.354 ************************************ 00:23:33.354 START TEST 
fio_dif_1_multi_subsystems 00:23:33.354 ************************************ 00:23:33.354 14:42:40 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:23:33.354 14:42:40 -- target/dif.sh@92 -- # local files=1 00:23:33.354 14:42:40 -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:33.354 14:42:40 -- target/dif.sh@28 -- # local sub 00:23:33.354 14:42:40 -- target/dif.sh@30 -- # for sub in "$@" 00:23:33.355 14:42:40 -- target/dif.sh@31 -- # create_subsystem 0 00:23:33.355 14:42:40 -- target/dif.sh@18 -- # local sub_id=0 00:23:33.355 14:42:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:33.355 14:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.355 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:33.355 bdev_null0 00:23:33.355 14:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.355 14:42:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:33.355 14:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.355 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:33.355 14:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.355 14:42:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:33.355 14:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.355 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:33.355 14:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.355 14:42:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:33.355 14:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.355 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:33.355 [2024-04-17 14:42:40.096186] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.355 14:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.355 14:42:40 -- target/dif.sh@30 -- # for sub in "$@" 00:23:33.355 14:42:40 -- target/dif.sh@31 -- # create_subsystem 1 00:23:33.355 14:42:40 -- target/dif.sh@18 -- # local sub_id=1 00:23:33.355 14:42:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:33.355 14:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.355 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:33.355 bdev_null1 00:23:33.355 14:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.355 14:42:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:33.355 14:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.355 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:33.355 14:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.355 14:42:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:33.355 14:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.355 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:33.355 14:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.355 14:42:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.355 14:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.355 14:42:40 -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.355 14:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.355 14:42:40 -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:33.355 14:42:40 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:33.355 14:42:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:33.355 14:42:40 -- nvmf/common.sh@521 -- # config=() 00:23:33.355 14:42:40 -- nvmf/common.sh@521 -- # local subsystem config 00:23:33.355 14:42:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:33.355 14:42:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:33.355 14:42:40 -- target/dif.sh@82 -- # gen_fio_conf 00:23:33.355 14:42:40 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:33.355 14:42:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:33.355 { 00:23:33.355 "params": { 00:23:33.355 "name": "Nvme$subsystem", 00:23:33.355 "trtype": "$TEST_TRANSPORT", 00:23:33.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.355 "adrfam": "ipv4", 00:23:33.355 "trsvcid": "$NVMF_PORT", 00:23:33.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.355 "hdgst": ${hdgst:-false}, 00:23:33.355 "ddgst": ${ddgst:-false} 00:23:33.355 }, 00:23:33.355 "method": "bdev_nvme_attach_controller" 00:23:33.355 } 00:23:33.355 EOF 00:23:33.355 )") 00:23:33.355 14:42:40 -- target/dif.sh@54 -- # local file 00:23:33.355 14:42:40 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:33.355 14:42:40 -- target/dif.sh@56 -- # cat 00:23:33.355 14:42:40 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:33.355 14:42:40 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:33.355 14:42:40 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:33.355 14:42:40 -- common/autotest_common.sh@1327 -- # shift 00:23:33.355 14:42:40 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:33.355 14:42:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.355 14:42:40 -- nvmf/common.sh@543 -- # cat 00:23:33.355 14:42:40 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:33.355 14:42:40 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:33.355 14:42:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:33.355 14:42:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:33.355 14:42:40 -- target/dif.sh@72 -- # (( file <= files )) 00:23:33.355 14:42:40 -- target/dif.sh@73 -- # cat 00:23:33.355 14:42:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:33.355 14:42:40 -- target/dif.sh@72 -- # (( file++ )) 00:23:33.355 14:42:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:33.355 { 00:23:33.355 "params": { 00:23:33.355 "name": "Nvme$subsystem", 00:23:33.355 "trtype": "$TEST_TRANSPORT", 00:23:33.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.355 "adrfam": "ipv4", 00:23:33.355 "trsvcid": "$NVMF_PORT", 00:23:33.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.355 "hdgst": ${hdgst:-false}, 00:23:33.355 "ddgst": ${ddgst:-false} 00:23:33.355 }, 00:23:33.355 "method": "bdev_nvme_attach_controller" 00:23:33.355 } 00:23:33.355 EOF 00:23:33.355 )") 00:23:33.355 14:42:40 -- 
target/dif.sh@72 -- # (( file <= files )) 00:23:33.355 14:42:40 -- nvmf/common.sh@543 -- # cat 00:23:33.355 14:42:40 -- nvmf/common.sh@545 -- # jq . 00:23:33.355 14:42:40 -- nvmf/common.sh@546 -- # IFS=, 00:23:33.355 14:42:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:33.355 "params": { 00:23:33.355 "name": "Nvme0", 00:23:33.355 "trtype": "tcp", 00:23:33.355 "traddr": "10.0.0.2", 00:23:33.355 "adrfam": "ipv4", 00:23:33.355 "trsvcid": "4420", 00:23:33.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:33.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:33.355 "hdgst": false, 00:23:33.355 "ddgst": false 00:23:33.355 }, 00:23:33.355 "method": "bdev_nvme_attach_controller" 00:23:33.355 },{ 00:23:33.355 "params": { 00:23:33.355 "name": "Nvme1", 00:23:33.355 "trtype": "tcp", 00:23:33.355 "traddr": "10.0.0.2", 00:23:33.355 "adrfam": "ipv4", 00:23:33.355 "trsvcid": "4420", 00:23:33.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.355 "hdgst": false, 00:23:33.355 "ddgst": false 00:23:33.355 }, 00:23:33.355 "method": "bdev_nvme_attach_controller" 00:23:33.355 }' 00:23:33.355 14:42:40 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:33.355 14:42:40 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:33.355 14:42:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.355 14:42:40 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:33.355 14:42:40 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:33.355 14:42:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:33.355 14:42:40 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:33.355 14:42:40 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:33.355 14:42:40 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:33.355 14:42:40 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:33.355 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:33.355 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:33.355 fio-3.35 00:23:33.355 Starting 2 threads 00:23:33.355 [2024-04-17 14:42:40.758992] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
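For the multi-subsystem variant the target-side RPC sequence is the single-file setup done twice, once per bdev_null device. Collected from the rpc_cmd calls traced above into one place (rpc.py path as used elsewhere in this log; the loop is only a readability device, the test issues the calls individually):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in 0 1; do
      # 64 MB null bdev, 512-byte blocks plus 16 bytes of metadata, DIF type 1
      $RPC bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

Because the transport was created with --dif-insert-or-strip, the target is the one inserting and stripping that 16-byte protection information, which is the behaviour this dif test exercises; fio on the host side simply sees two plain 512-byte-block namespaces (Nvme0n1 and Nvme1n1).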
00:23:33.355 [2024-04-17 14:42:40.759075] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:43.327 00:23:43.327 filename0: (groupid=0, jobs=1): err= 0: pid=78937: Wed Apr 17 14:42:50 2024 00:23:43.327 read: IOPS=4627, BW=18.1MiB/s (19.0MB/s)(181MiB/10001msec) 00:23:43.327 slat (nsec): min=7191, max=58753, avg=13737.81, stdev=3375.87 00:23:43.327 clat (usec): min=450, max=3090, avg=826.37, stdev=50.34 00:23:43.327 lat (usec): min=458, max=3113, avg=840.10, stdev=50.44 00:23:43.327 clat percentiles (usec): 00:23:43.327 | 1.00th=[ 766], 5.00th=[ 783], 10.00th=[ 791], 20.00th=[ 799], 00:23:43.327 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[ 824], 60.00th=[ 824], 00:23:43.327 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 865], 95.00th=[ 881], 00:23:43.327 | 99.00th=[ 1074], 99.50th=[ 1123], 99.90th=[ 1172], 99.95th=[ 1188], 00:23:43.327 | 99.99th=[ 1221] 00:23:43.327 bw ( KiB/s): min=17152, max=18784, per=50.01%, avg=18514.53, stdev=402.00, samples=19 00:23:43.327 iops : min= 4288, max= 4696, avg=4628.63, stdev=100.50, samples=19 00:23:43.327 lat (usec) : 500=0.01%, 750=0.31%, 1000=98.14% 00:23:43.327 lat (msec) : 2=1.53%, 4=0.01% 00:23:43.327 cpu : usr=89.77%, sys=8.83%, ctx=24, majf=0, minf=9 00:23:43.327 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.327 issued rwts: total=46284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.327 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:43.327 filename1: (groupid=0, jobs=1): err= 0: pid=78938: Wed Apr 17 14:42:50 2024 00:23:43.327 read: IOPS=4627, BW=18.1MiB/s (19.0MB/s)(181MiB/10001msec) 00:23:43.328 slat (nsec): min=5004, max=56035, avg=13404.31, stdev=3252.00 00:23:43.328 clat (usec): min=665, max=2477, avg=828.22, stdev=52.56 00:23:43.328 lat (usec): min=675, max=2499, avg=841.63, stdev=53.17 00:23:43.328 clat percentiles (usec): 00:23:43.328 | 1.00th=[ 725], 5.00th=[ 758], 10.00th=[ 783], 20.00th=[ 799], 00:23:43.328 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[ 824], 60.00th=[ 832], 00:23:43.328 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 873], 95.00th=[ 898], 00:23:43.328 | 99.00th=[ 1057], 99.50th=[ 1123], 99.90th=[ 1188], 99.95th=[ 1205], 00:23:43.328 | 99.99th=[ 1270] 00:23:43.328 bw ( KiB/s): min=17152, max=18784, per=50.01%, avg=18514.26, stdev=397.88, samples=19 00:23:43.328 iops : min= 4288, max= 4696, avg=4628.53, stdev=99.47, samples=19 00:23:43.328 lat (usec) : 750=3.72%, 1000=94.85% 00:23:43.328 lat (msec) : 2=1.42%, 4=0.01% 00:23:43.328 cpu : usr=90.24%, sys=8.40%, ctx=16, majf=0, minf=9 00:23:43.328 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.328 issued rwts: total=46284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.328 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:43.328 00:23:43.328 Run status group 0 (all jobs): 00:23:43.328 READ: bw=36.2MiB/s (37.9MB/s), 18.1MiB/s-18.1MiB/s (19.0MB/s-19.0MB/s), io=362MiB (379MB), run=10001-10001msec 00:23:43.328 14:42:51 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:43.328 14:42:51 -- target/dif.sh@43 -- # local sub 00:23:43.328 14:42:51 -- target/dif.sh@45 -- # for sub in "$@" 00:23:43.328 14:42:51 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:23:43.328 14:42:51 -- target/dif.sh@36 -- # local sub_id=0 00:23:43.328 14:42:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:43.328 14:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 14:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.328 14:42:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:43.328 14:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 14:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.328 14:42:51 -- target/dif.sh@45 -- # for sub in "$@" 00:23:43.328 14:42:51 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:43.328 14:42:51 -- target/dif.sh@36 -- # local sub_id=1 00:23:43.328 14:42:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.328 14:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 14:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.328 14:42:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:43.328 14:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 14:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.328 00:23:43.328 real 0m11.025s 00:23:43.328 user 0m18.663s 00:23:43.328 sys 0m1.963s 00:23:43.328 14:42:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 ************************************ 00:23:43.328 END TEST fio_dif_1_multi_subsystems 00:23:43.328 ************************************ 00:23:43.328 14:42:51 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:43.328 14:42:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:43.328 14:42:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 ************************************ 00:23:43.328 START TEST fio_dif_rand_params 00:23:43.328 ************************************ 00:23:43.328 14:42:51 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:23:43.328 14:42:51 -- target/dif.sh@100 -- # local NULL_DIF 00:23:43.328 14:42:51 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:43.328 14:42:51 -- target/dif.sh@103 -- # NULL_DIF=3 00:23:43.328 14:42:51 -- target/dif.sh@103 -- # bs=128k 00:23:43.328 14:42:51 -- target/dif.sh@103 -- # numjobs=3 00:23:43.328 14:42:51 -- target/dif.sh@103 -- # iodepth=3 00:23:43.328 14:42:51 -- target/dif.sh@103 -- # runtime=5 00:23:43.328 14:42:51 -- target/dif.sh@105 -- # create_subsystems 0 00:23:43.328 14:42:51 -- target/dif.sh@28 -- # local sub 00:23:43.328 14:42:51 -- target/dif.sh@30 -- # for sub in "$@" 00:23:43.328 14:42:51 -- target/dif.sh@31 -- # create_subsystem 0 00:23:43.328 14:42:51 -- target/dif.sh@18 -- # local sub_id=0 00:23:43.328 14:42:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:43.328 14:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 bdev_null0 00:23:43.328 14:42:51 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:23:43.328 14:42:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:43.328 14:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 14:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.328 14:42:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:43.328 14:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 14:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.328 14:42:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:43.328 14:42:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.328 14:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.328 [2024-04-17 14:42:51.240357] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.328 14:42:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.328 14:42:51 -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:43.328 14:42:51 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:43.328 14:42:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:43.328 14:42:51 -- nvmf/common.sh@521 -- # config=() 00:23:43.328 14:42:51 -- nvmf/common.sh@521 -- # local subsystem config 00:23:43.328 14:42:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:43.328 14:42:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:43.328 { 00:23:43.328 "params": { 00:23:43.328 "name": "Nvme$subsystem", 00:23:43.328 "trtype": "$TEST_TRANSPORT", 00:23:43.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.328 "adrfam": "ipv4", 00:23:43.328 "trsvcid": "$NVMF_PORT", 00:23:43.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.328 "hdgst": ${hdgst:-false}, 00:23:43.328 "ddgst": ${ddgst:-false} 00:23:43.328 }, 00:23:43.328 "method": "bdev_nvme_attach_controller" 00:23:43.328 } 00:23:43.328 EOF 00:23:43.328 )") 00:23:43.328 14:42:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:43.328 14:42:51 -- target/dif.sh@82 -- # gen_fio_conf 00:23:43.328 14:42:51 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:43.328 14:42:51 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:43.328 14:42:51 -- target/dif.sh@54 -- # local file 00:23:43.328 14:42:51 -- target/dif.sh@56 -- # cat 00:23:43.328 14:42:51 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:43.328 14:42:51 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:43.328 14:42:51 -- nvmf/common.sh@543 -- # cat 00:23:43.328 14:42:51 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:43.328 14:42:51 -- common/autotest_common.sh@1327 -- # shift 00:23:43.328 14:42:51 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:43.328 14:42:51 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:43.328 14:42:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:43.328 14:42:51 -- target/dif.sh@72 -- # (( file <= files )) 00:23:43.328 14:42:51 -- nvmf/common.sh@545 -- 
# jq . 00:23:43.328 14:42:51 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:43.328 14:42:51 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:43.328 14:42:51 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:43.328 14:42:51 -- nvmf/common.sh@546 -- # IFS=, 00:23:43.328 14:42:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:43.328 "params": { 00:23:43.328 "name": "Nvme0", 00:23:43.328 "trtype": "tcp", 00:23:43.328 "traddr": "10.0.0.2", 00:23:43.328 "adrfam": "ipv4", 00:23:43.328 "trsvcid": "4420", 00:23:43.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:43.328 "hdgst": false, 00:23:43.328 "ddgst": false 00:23:43.328 }, 00:23:43.328 "method": "bdev_nvme_attach_controller" 00:23:43.328 }' 00:23:43.328 14:42:51 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:43.328 14:42:51 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:43.328 14:42:51 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:43.328 14:42:51 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:43.328 14:42:51 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:43.328 14:42:51 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:43.328 14:42:51 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:43.328 14:42:51 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:43.328 14:42:51 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:43.328 14:42:51 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:43.328 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:43.328 ... 00:23:43.328 fio-3.35 00:23:43.328 Starting 3 threads 00:23:43.328 [2024-04-17 14:42:51.784454] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
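For readers following the trace above: the harness has just built a DIF-type-3 null bdev, exported it over NVMe/TCP, generated the bdev_nvme_attach_controller JSON shown, and handed both the JSON and a generated job file to fio through SPDK's bdev ioengine. A minimal sketch of the same steps, assuming a stock SPDK checkout with the target already listening on 10.0.0.2:4420; the scripts/rpc.py path, the bdev.json file name, and the exact fio flag spelling below are illustrative stand-ins, not copied from the harness (which pipes both files over /dev/fd):

#!/usr/bin/env bash
# Illustrative sketch of the setup traced above (not the exact dif.sh code).
RPC=scripts/rpc.py   # assumed path inside an SPDK checkout

# Null bdev with 16-byte metadata and DIF type 3, exposed via NVMe/TCP,
# mirroring the rpc_cmd calls in the trace.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# fio then drives the attached controller's bdev (Nvme0n1) through the
# spdk_bdev ioengine, loading the attach-controller JSON printed above.
LD_PRELOAD=build/fio/spdk_bdev fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
    --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=5

The bs/iodepth/numjobs/runtime values match the NULL_DIF=3 parameters set at the top of this test; the fio job header that follows ("rw=randread, bs=128KiB, ioengine=spdk_bdev, iodepth=3") confirms the effective options.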
00:23:43.328 [2024-04-17 14:42:51.784534] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:48.598 00:23:48.598 filename0: (groupid=0, jobs=1): err= 0: pid=79099: Wed Apr 17 14:42:56 2024 00:23:48.598 read: IOPS=251, BW=31.4MiB/s (33.0MB/s)(158MiB/5008msec) 00:23:48.598 slat (nsec): min=7799, max=45033, avg=16948.93, stdev=5623.22 00:23:48.598 clat (usec): min=8273, max=13205, avg=11883.91, stdev=194.80 00:23:48.598 lat (usec): min=8287, max=13229, avg=11900.86, stdev=195.53 00:23:48.598 clat percentiles (usec): 00:23:48.598 | 1.00th=[11863], 5.00th=[11863], 10.00th=[11863], 20.00th=[11863], 00:23:48.598 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11863], 60.00th=[11863], 00:23:48.598 | 70.00th=[11863], 80.00th=[11994], 90.00th=[11994], 95.00th=[11994], 00:23:48.598 | 99.00th=[12125], 99.50th=[12125], 99.90th=[13173], 99.95th=[13173], 00:23:48.598 | 99.99th=[13173] 00:23:48.598 bw ( KiB/s): min=31488, max=32320, per=33.35%, avg=32185.60, stdev=245.94, samples=10 00:23:48.598 iops : min= 246, max= 252, avg=251.40, stdev= 1.90, samples=10 00:23:48.598 lat (msec) : 10=0.24%, 20=99.76% 00:23:48.598 cpu : usr=91.49%, sys=7.85%, ctx=93, majf=0, minf=9 00:23:48.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:48.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.598 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.598 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:48.598 filename0: (groupid=0, jobs=1): err= 0: pid=79100: Wed Apr 17 14:42:56 2024 00:23:48.598 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(157MiB/5002msec) 00:23:48.598 slat (nsec): min=7663, max=49790, avg=16596.36, stdev=6328.28 00:23:48.598 clat (usec): min=11720, max=15843, avg=11899.40, stdev=204.59 00:23:48.598 lat (usec): min=11730, max=15874, avg=11916.00, stdev=205.65 00:23:48.598 clat percentiles (usec): 00:23:48.598 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11863], 20.00th=[11863], 00:23:48.598 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11863], 60.00th=[11863], 00:23:48.598 | 70.00th=[11863], 80.00th=[11994], 90.00th=[11994], 95.00th=[11994], 00:23:48.598 | 99.00th=[12125], 99.50th=[12125], 99.90th=[15795], 99.95th=[15795], 00:23:48.598 | 99.99th=[15795] 00:23:48.598 bw ( KiB/s): min=31425, max=32256, per=33.32%, avg=32156.44, stdev=275.13, samples=9 00:23:48.598 iops : min= 245, max= 252, avg=251.11, stdev= 2.32, samples=9 00:23:48.598 lat (msec) : 20=100.00% 00:23:48.598 cpu : usr=91.40%, sys=7.96%, ctx=5, majf=0, minf=9 00:23:48.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:48.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.598 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.598 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:48.598 filename0: (groupid=0, jobs=1): err= 0: pid=79101: Wed Apr 17 14:42:56 2024 00:23:48.598 read: IOPS=251, BW=31.4MiB/s (33.0MB/s)(158MiB/5009msec) 00:23:48.598 slat (nsec): min=7797, max=45819, avg=16925.30, stdev=5575.73 00:23:48.598 clat (usec): min=8277, max=13675, avg=11885.37, stdev=204.87 00:23:48.598 lat (usec): min=8291, max=13704, avg=11902.30, stdev=205.73 00:23:48.598 clat percentiles (usec): 00:23:48.598 | 1.00th=[11731], 5.00th=[11863], 
10.00th=[11863], 20.00th=[11863], 00:23:48.598 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11863], 60.00th=[11863], 00:23:48.598 | 70.00th=[11863], 80.00th=[11994], 90.00th=[11994], 95.00th=[11994], 00:23:48.598 | 99.00th=[12125], 99.50th=[12125], 99.90th=[13698], 99.95th=[13698], 00:23:48.598 | 99.99th=[13698] 00:23:48.598 bw ( KiB/s): min=31425, max=32320, per=33.34%, avg=32179.30, stdev=265.80, samples=10 00:23:48.598 iops : min= 245, max= 252, avg=251.30, stdev= 2.21, samples=10 00:23:48.598 lat (msec) : 10=0.24%, 20=99.76% 00:23:48.598 cpu : usr=90.91%, sys=8.43%, ctx=32, majf=0, minf=9 00:23:48.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:48.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.599 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.599 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:48.599 00:23:48.599 Run status group 0 (all jobs): 00:23:48.599 READ: bw=94.3MiB/s (98.8MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-33.0MB/s), io=472MiB (495MB), run=5002-5009msec 00:23:48.599 14:42:57 -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:48.599 14:42:57 -- target/dif.sh@43 -- # local sub 00:23:48.599 14:42:57 -- target/dif.sh@45 -- # for sub in "$@" 00:23:48.599 14:42:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:48.599 14:42:57 -- target/dif.sh@36 -- # local sub_id=0 00:23:48.599 14:42:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@109 -- # NULL_DIF=2 00:23:48.599 14:42:57 -- target/dif.sh@109 -- # bs=4k 00:23:48.599 14:42:57 -- target/dif.sh@109 -- # numjobs=8 00:23:48.599 14:42:57 -- target/dif.sh@109 -- # iodepth=16 00:23:48.599 14:42:57 -- target/dif.sh@109 -- # runtime= 00:23:48.599 14:42:57 -- target/dif.sh@109 -- # files=2 00:23:48.599 14:42:57 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:48.599 14:42:57 -- target/dif.sh@28 -- # local sub 00:23:48.599 14:42:57 -- target/dif.sh@30 -- # for sub in "$@" 00:23:48.599 14:42:57 -- target/dif.sh@31 -- # create_subsystem 0 00:23:48.599 14:42:57 -- target/dif.sh@18 -- # local sub_id=0 00:23:48.599 14:42:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 bdev_null0 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 [2024-04-17 14:42:57.137937] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@30 -- # for sub in "$@" 00:23:48.599 14:42:57 -- target/dif.sh@31 -- # create_subsystem 1 00:23:48.599 14:42:57 -- target/dif.sh@18 -- # local sub_id=1 00:23:48.599 14:42:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 bdev_null1 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.599 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.599 14:42:57 -- target/dif.sh@30 -- # for sub in "$@" 00:23:48.599 14:42:57 -- target/dif.sh@31 -- # create_subsystem 2 00:23:48.599 14:42:57 -- target/dif.sh@18 -- # local sub_id=2 00:23:48.599 14:42:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:48.599 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.599 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.858 bdev_null2 00:23:48.858 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.858 14:42:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:48.859 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.859 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.859 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.859 14:42:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:48.859 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.859 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.859 14:42:57 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.859 14:42:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:48.859 14:42:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.859 14:42:57 -- common/autotest_common.sh@10 -- # set +x 00:23:48.859 14:42:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.859 14:42:57 -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:48.859 14:42:57 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:48.859 14:42:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:48.859 14:42:57 -- nvmf/common.sh@521 -- # config=() 00:23:48.859 14:42:57 -- nvmf/common.sh@521 -- # local subsystem config 00:23:48.859 14:42:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.859 14:42:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.859 { 00:23:48.859 "params": { 00:23:48.859 "name": "Nvme$subsystem", 00:23:48.859 "trtype": "$TEST_TRANSPORT", 00:23:48.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.859 "adrfam": "ipv4", 00:23:48.859 "trsvcid": "$NVMF_PORT", 00:23:48.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.859 "hdgst": ${hdgst:-false}, 00:23:48.859 "ddgst": ${ddgst:-false} 00:23:48.859 }, 00:23:48.859 "method": "bdev_nvme_attach_controller" 00:23:48.859 } 00:23:48.859 EOF 00:23:48.859 )") 00:23:48.859 14:42:57 -- target/dif.sh@82 -- # gen_fio_conf 00:23:48.859 14:42:57 -- target/dif.sh@54 -- # local file 00:23:48.859 14:42:57 -- target/dif.sh@56 -- # cat 00:23:48.859 14:42:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:48.859 14:42:57 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:48.859 14:42:57 -- nvmf/common.sh@543 -- # cat 00:23:48.859 14:42:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:48.859 14:42:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:48.859 14:42:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:48.859 14:42:57 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:48.859 14:42:57 -- common/autotest_common.sh@1327 -- # shift 00:23:48.859 14:42:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:48.859 14:42:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.859 14:42:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:48.859 14:42:57 -- target/dif.sh@72 -- # (( file <= files )) 00:23:48.859 14:42:57 -- target/dif.sh@73 -- # cat 00:23:48.859 14:42:57 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:48.859 14:42:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.859 14:42:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:48.859 14:42:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.859 { 00:23:48.859 "params": { 00:23:48.859 "name": "Nvme$subsystem", 00:23:48.859 "trtype": "$TEST_TRANSPORT", 00:23:48.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.859 "adrfam": "ipv4", 00:23:48.859 "trsvcid": "$NVMF_PORT", 00:23:48.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.859 "hdgst": ${hdgst:-false}, 00:23:48.859 "ddgst": ${ddgst:-false} 
00:23:48.859 }, 00:23:48.859 "method": "bdev_nvme_attach_controller" 00:23:48.859 } 00:23:48.859 EOF 00:23:48.859 )") 00:23:48.859 14:42:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:48.859 14:42:57 -- nvmf/common.sh@543 -- # cat 00:23:48.859 14:42:57 -- target/dif.sh@72 -- # (( file++ )) 00:23:48.859 14:42:57 -- target/dif.sh@72 -- # (( file <= files )) 00:23:48.859 14:42:57 -- target/dif.sh@73 -- # cat 00:23:48.859 14:42:57 -- target/dif.sh@72 -- # (( file++ )) 00:23:48.859 14:42:57 -- target/dif.sh@72 -- # (( file <= files )) 00:23:48.859 14:42:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:48.859 14:42:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:48.859 { 00:23:48.859 "params": { 00:23:48.859 "name": "Nvme$subsystem", 00:23:48.859 "trtype": "$TEST_TRANSPORT", 00:23:48.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.859 "adrfam": "ipv4", 00:23:48.859 "trsvcid": "$NVMF_PORT", 00:23:48.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.859 "hdgst": ${hdgst:-false}, 00:23:48.859 "ddgst": ${ddgst:-false} 00:23:48.859 }, 00:23:48.859 "method": "bdev_nvme_attach_controller" 00:23:48.859 } 00:23:48.859 EOF 00:23:48.859 )") 00:23:48.859 14:42:57 -- nvmf/common.sh@543 -- # cat 00:23:48.859 14:42:57 -- nvmf/common.sh@545 -- # jq . 00:23:48.859 14:42:57 -- nvmf/common.sh@546 -- # IFS=, 00:23:48.859 14:42:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:48.859 "params": { 00:23:48.859 "name": "Nvme0", 00:23:48.859 "trtype": "tcp", 00:23:48.859 "traddr": "10.0.0.2", 00:23:48.859 "adrfam": "ipv4", 00:23:48.859 "trsvcid": "4420", 00:23:48.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:48.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:48.859 "hdgst": false, 00:23:48.859 "ddgst": false 00:23:48.859 }, 00:23:48.859 "method": "bdev_nvme_attach_controller" 00:23:48.859 },{ 00:23:48.859 "params": { 00:23:48.859 "name": "Nvme1", 00:23:48.859 "trtype": "tcp", 00:23:48.859 "traddr": "10.0.0.2", 00:23:48.859 "adrfam": "ipv4", 00:23:48.859 "trsvcid": "4420", 00:23:48.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.859 "hdgst": false, 00:23:48.859 "ddgst": false 00:23:48.859 }, 00:23:48.859 "method": "bdev_nvme_attach_controller" 00:23:48.859 },{ 00:23:48.859 "params": { 00:23:48.859 "name": "Nvme2", 00:23:48.859 "trtype": "tcp", 00:23:48.859 "traddr": "10.0.0.2", 00:23:48.859 "adrfam": "ipv4", 00:23:48.859 "trsvcid": "4420", 00:23:48.859 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:48.859 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:48.859 "hdgst": false, 00:23:48.859 "ddgst": false 00:23:48.859 }, 00:23:48.859 "method": "bdev_nvme_attach_controller" 00:23:48.859 }' 00:23:48.859 14:42:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:48.859 14:42:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:48.859 14:42:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.859 14:42:57 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:48.859 14:42:57 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:48.859 14:42:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:48.859 14:42:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:48.859 14:42:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:48.859 14:42:57 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:48.859 14:42:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:48.859 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:48.859 ... 00:23:48.859 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:48.859 ... 00:23:48.859 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:48.859 ... 00:23:48.859 fio-3.35 00:23:48.859 Starting 24 threads 00:23:49.427 [2024-04-17 14:42:57.880095] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:23:49.427 [2024-04-17 14:42:57.880160] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:04.315 fio: pid=79203, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.315 [2024-04-17 14:43:10.433364] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x27d09c0 via correct icresp 00:24:04.315 [2024-04-17 14:43:10.433431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x27d09c0 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=15663104, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=9408512, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=15581184, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=27107328, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=55336960, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=39804928, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=46354432, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=34320384, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=9441280, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=26243072, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=39927808, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=55951360, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=46231552, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=23080960, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=37081088, buflen=4096 00:24:04.315 fio: io_u error on file Nvme0n1: Input/output error: read offset=33689600, buflen=4096 00:24:04.315 fio: pid=79218, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.315 [2024-04-17 14:43:11.868495] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x35131e0 via correct icresp 00:24:04.315 [2024-04-17 14:43:11.869039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x35131e0 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=28827648, buflen=4096 
00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=49283072, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=65605632, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=58163200, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=34717696, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=34066432, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=49029120, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=50032640, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=9048064, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=25878528, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=53391360, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=44511232, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=14729216, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=33435648, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=42016768, buflen=4096 00:24:04.315 [2024-04-17 14:43:11.876314] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2837860 via correct icresp 00:24:04.315 [2024-04-17 14:43:11.876354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2837860 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=44494848, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=7815168, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=19873792, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=6111232, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=16973824, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=49205248, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=31784960, buflen=4096 00:24:04.315 fio: io_u error on file Nvme2n1: Input/output error: read offset=7819264, buflen=4096 00:24:04.315 fio: pid=79224, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=59662336, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=34951168, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=1232896, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=54398976, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=61435904, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=61767680, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=40538112, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=51154944, buflen=4096 00:24:04.316 fio: pid=79209, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.316 [2024-04-17 14:43:11.917299] 
nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x27d0b60 via correct icresp 00:24:04.316 [2024-04-17 14:43:11.917341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x27d0b60 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=53342208, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=39178240, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=23973888, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=38977536, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=12337152, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=4063232, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=63217664, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=3981312, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=36765696, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=58126336, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=41697280, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=16314368, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=13213696, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=12914688, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=30134272, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=38567936, buflen=4096 00:24:04.316 [2024-04-17 14:43:11.932607] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3512820 via correct icresp 00:24:04.316 [2024-04-17 14:43:11.932833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3512820 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=3481600, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=56184832, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=66342912, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=24215552, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=43143168, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=66883584, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=51359744, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=10039296, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=26849280, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=11550720, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=26439680, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=22265856, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=16035840, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=34263040, buflen=4096 
00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=64954368, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=65003520, buflen=4096 00:24:04.316 fio: pid=79220, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.316 [2024-04-17 14:43:11.936293] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3e76000 via correct icresp 00:24:04.316 [2024-04-17 14:43:11.936327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3e76000 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=38592512, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=49111040, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=16941056, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=34516992, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=16191488, buflen=4096 00:24:04.316 fio: pid=79225, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=10518528, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=50724864, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=65597440, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=49045504, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=24788992, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=33144832, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=9060352, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=38088704, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=60559360, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=40730624, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=9224192, buflen=4096 00:24:04.316 [2024-04-17 14:43:11.949266] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3513ba0 via correct icresp 00:24:04.316 [2024-04-17 14:43:11.949300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3513ba0 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=64974848, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=51191808, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=12398592, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=61280256, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=48734208, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=11575296, buflen=4096 00:24:04.316 fio: pid=79222, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=52445184, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=60080128, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read 
offset=44896256, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=53538816, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=62349312, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=37318656, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=2191360, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=40357888, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=4063232, buflen=4096 00:24:04.316 fio: io_u error on file Nvme2n1: Input/output error: read offset=26181632, buflen=4096 00:24:04.316 [2024-04-17 14:43:11.958285] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3513520 via correct icresp 00:24:04.316 [2024-04-17 14:43:11.958320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3513520 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=47452160, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=34287616, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=831488, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=53870592, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=37478400, buflen=4096 00:24:04.316 fio: pid=79211, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=2457600, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=40656896, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=11014144, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=48861184, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=17141760, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=55959552, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=27402240, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=38703104, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=35835904, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=1691648, buflen=4096 00:24:04.316 fio: io_u error on file Nvme1n1: Input/output error: read offset=61239296, buflen=4096 00:24:04.316 [2024-04-17 14:43:11.980403] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3513860 via correct icresp 00:24:04.316 [2024-04-17 14:43:11.980418] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3e776c0 via correct icresp 00:24:04.316 [2024-04-17 14:43:11.980443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3513860 00:24:04.316 [2024-04-17 14:43:11.980465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3e776c0 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=65564672, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=58109952, buflen=4096 00:24:04.316 fio: io_u error on file 
Nvme0n1: Input/output error: read offset=44072960, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=34648064, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=40460288, buflen=4096 00:24:04.316 fio: pid=79202, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=65613824, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=966656, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=15175680, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=21958656, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=56102912, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=45244416, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=37945344, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=63819776, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=25673728, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=864256, buflen=4096 00:24:04.316 fio: io_u error on file Nvme0n1: Input/output error: read offset=50180096, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=23298048, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=37404672, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=53178368, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=782336, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=39247872, buflen=4096 00:24:04.317 fio: pid=79205, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=13889536, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=13529088, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=66187264, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=42729472, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=21630976, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=59961344, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=8798208, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=32940032, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=12214272, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=58417152, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=7098368, buflen=4096 00:24:04.317 [2024-04-17 14:43:11.983331] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3e76680 via correct icresp 00:24:04.317 [2024-04-17 14:43:11.983461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3e76680 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=19333120, buflen=4096 
00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=35749888, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=24522752, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=45363200, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=46178304, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=12845056, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=48640000, buflen=4096 00:24:04.317 fio: pid=79219, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=48943104, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=19562496, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=10452992, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=7839744, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=52715520, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=51306496, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=22798336, buflen=4096 00:24:04.317 fio: io_u error on file Nvme2n1: Input/output error: read offset=41910272, buflen=4096 00:24:04.317 [2024-04-17 14:43:11.985601] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3e76d00 via correct icresp 00:24:04.317 [2024-04-17 14:43:11.985649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3e76d00 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=26419200, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=56233984, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=7028736, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=57393152, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=66609152, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=49180672, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=20025344, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=44130304, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=23453696, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=34766848, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=36364288, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=32235520, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=32780288, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=54685696, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=13651968, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=10285056, buflen=4096 00:24:04.317 fio: pid=79208, err=5/file:io_u.c:1889, 
func=io_u error, error=Input/output error 00:24:04.317 [2024-04-17 14:43:11.987441] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3e769c0 via correct icresp 00:24:04.317 [2024-04-17 14:43:11.987453] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3e77a00 via correct icresp 00:24:04.317 [2024-04-17 14:43:11.987483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3e769c0 00:24:04.317 [2024-04-17 14:43:11.987503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3e77a00 00:24:04.317 [2024-04-17 14:43:11.987506] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3e77d40 via correct icresp 00:24:04.317 [2024-04-17 14:43:11.987541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3e77d40 00:24:04.317 [2024-04-17 14:43:11.987759] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4d2e1a0 via correct icresp 00:24:04.317 [2024-04-17 14:43:11.987788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4d2e1a0 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=51261440, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=23396352, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=28934144, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=47759360, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=5263360, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=64847872, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=737280, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=57282560, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=21434368, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=10530816, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=2691072, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=15687680, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=36368384, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=43036672, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=49655808, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=52785152, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=23162880, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=65122304, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=3477504, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=5808128, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=32575488, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=36683776, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=37040128, buflen=4096 00:24:04.317 fio: 
io_u error on file Nvme1n1: Input/output error: read offset=48943104, buflen=4096 00:24:04.317 fio: pid=79206, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=40386560, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=36069376, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=32911360, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=9801728, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=24539136, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=50425856, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=10510336, buflen=4096 00:24:04.317 fio: io_u error on file Nvme1n1: Input/output error: read offset=27250688, buflen=4096 00:24:04.317 fio: pid=79216, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.317 fio: pid=79207, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=36966400, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=55111680, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=38477824, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=23859200, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=58073088, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=59396096, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=56569856, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=61960192, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=41091072, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=42139648, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=4927488, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=66584576, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=53096448, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=13066240, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=66953216, buflen=4096 00:24:04.317 fio: io_u error on file Nvme0n1: Input/output error: read offset=55177216, buflen=4096 00:24:04.317 [2024-04-17 14:43:11.988528] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3e77040 via correct icresp 00:24:04.317 [2024-04-17 14:43:11.988565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3e77040 00:24:04.317 [2024-04-17 14:43:11.988695] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4d2eea0 via correct icresp 00:24:04.317 [2024-04-17 14:43:11.988710] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x3e77380 via correct icresp 00:24:04.317 [2024-04-17 14:43:11.988700] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4d2e820 via correct icresp 
00:24:04.317 [2024-04-17 14:43:11.988738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4d2eea0 00:24:04.317 [2024-04-17 14:43:11.988759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x3e77380 00:24:04.317 [2024-04-17 14:43:11.988821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4d2e820 00:24:04.318 fio: pid=79212, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.318 fio: pid=79215, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.318 fio: pid=79213, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.318 fio: pid=79221, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.318 fio: pid=79217, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.318 fio: pid=79204, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.318 fio: pid=79210, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:04.318 [2024-04-17 14:43:11.988933] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4d2e4e0 via correct icresp 00:24:04.318 [2024-04-17 14:43:11.988979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4d2e4e0 00:24:04.318 [2024-04-17 14:43:11.989277] nvme_tcp.c:2402:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4d2eb60 via correct icresp 00:24:04.318 [2024-04-17 14:43:11.989325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4d2eb60 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=33353728, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=62767104, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=9048064, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=34938880, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=31834112, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=52871168, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=46837760, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=10993664, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=1417216, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=45432832, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=51412992, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=63549440, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=26009600, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=61190144, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=32485376, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=5083136, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=2039808, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=37330944, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read 
offset=58757120, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=49201152, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=32235520, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=66932736, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=6856704, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=49119232, buflen=4096 00:24:04.318 [2024-04-17 14:43:11.989482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27d0000 (9): Bad file descriptor 00:24:04.318 [2024-04-17 14:43:11.989534] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27bd860 (9): Bad file descriptor 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=21385216, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=24121344, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=65490944, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=54857728, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=65241088, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=26177536, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=14835712, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=1720320, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=40833024, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=2764800, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=59269120, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=65015808, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=8192, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=37806080, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=15462400, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=17604608, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=2277376, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=33816576, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=15503360, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=18022400, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=35905536, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=64307200, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=35549184, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=39657472, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=52142080, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=34988032, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=55037952, buflen=4096 
00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=55644160, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=54173696, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=22335488, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=62726144, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=58773504, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=59994112, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=62394368, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=12816384, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=39387136, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=14749696, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=32546816, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=36941824, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=29216768, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=47931392, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=16596992, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=20307968, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=40996864, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=45797376, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=45506560, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=6017024, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=21307392, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=58040320, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=46010368, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=42749952, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=35491840, buflen=4096 00:24:04.318 fio: io_u error on file Nvme2n1: Input/output error: read offset=22528000, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=52510720, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=57552896, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=36761600, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=17387520, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=37646336, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=42897408, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=28008448, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=42741760, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=2555904, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: 
Input/output error: read offset=34897920, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=50384896, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=24530944, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=45527040, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=31428608, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=38137856, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=48013312, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=19353600, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=22044672, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=60690432, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=5431296, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=40865792, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=32223232, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=54571008, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=2121728, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=12918784, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=28672, buflen=4096 00:24:04.318 fio: io_u error on file Nvme0n1: Input/output error: read offset=18472960, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=28303360, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=54108160, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=45944832, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=36769792, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=19599360, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=9601024, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=6950912, buflen=4096 00:24:04.318 fio: io_u error on file Nvme1n1: Input/output error: read offset=58298368, buflen=4096 00:24:04.318 [2024-04-17 14:43:11.992451] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27bd6c0 (9): Bad file descriptor 00:24:04.318 00:24:04.319 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79202: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79203: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 
00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79204: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=9, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79205: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79206: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=6, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79207: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79208: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename0: (groupid=0, jobs=1): err= 5 
(file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79209: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79210: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79211: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79212: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79213: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename1: (groupid=0, jobs=1): err= 0: pid=79214: Wed Apr 17 14:43:12 2024 00:24:04.319 read: IOPS=1786, BW=7144KiB/s (7316kB/s)(69.8MiB/10005msec) 00:24:04.319 slat (usec): min=4, max=9044, avg=16.73, stdev=180.01 00:24:04.319 clat (usec): min=638, max=33485, avg=8809.34, stdev=3559.03 00:24:04.319 lat (usec): min=649, max=33497, avg=8826.06, stdev=3564.01 00:24:04.319 clat percentiles (usec): 00:24:04.319 | 1.00th=[ 1844], 5.00th=[ 2507], 10.00th=[ 4555], 20.00th=[ 6915], 00:24:04.319 | 30.00th=[ 
7570], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8717], 00:24:04.319 | 70.00th=[10159], 80.00th=[11863], 90.00th=[13173], 95.00th=[15008], 00:24:04.319 | 99.00th=[20841], 99.50th=[23200], 99.90th=[26608], 99.95th=[26870], 00:24:04.319 | 99.99th=[33424] 00:24:04.319 bw ( KiB/s): min= 5088, max= 9088, per=51.55%, avg=7141.60, stdev=1068.04, samples=20 00:24:04.319 iops : min= 1272, max= 2272, avg=1785.40, stdev=267.01, samples=20 00:24:04.319 lat (usec) : 750=0.02%, 1000=0.16% 00:24:04.319 lat (msec) : 2=3.20%, 4=5.44%, 10=60.81%, 20=29.32%, 50=1.06% 00:24:04.319 cpu : usr=40.55%, sys=4.05%, ctx=1400, majf=0, minf=9 00:24:04.319 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=17870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79215: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79216: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=3, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79217: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=5, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79218: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.319 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.319 filename2: (groupid=0, jobs=1): err= 5 
(file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79219: Wed Apr 17 14:43:12 2024 00:24:04.319 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.319 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.319 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.320 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79220: Wed Apr 17 14:43:12 2024 00:24:04.320 cpu : usr=0.00%, sys=0.00%, ctx=16, majf=0, minf=0 00:24:04.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.320 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79221: Wed Apr 17 14:43:12 2024 00:24:04.320 cpu : usr=0.00%, sys=0.00%, ctx=4, majf=0, minf=0 00:24:04.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.320 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79222: Wed Apr 17 14:43:12 2024 00:24:04.320 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.320 filename2: (groupid=0, jobs=1): err= 0: pid=79223: Wed Apr 17 14:43:12 2024 00:24:04.320 read: IOPS=1677, BW=6711KiB/s (6872kB/s)(65.6MiB/10009msec) 00:24:04.320 slat (usec): min=4, max=9022, avg=17.40, stdev=215.44 00:24:04.320 clat (usec): min=440, max=35624, avg=9393.69, stdev=4037.38 00:24:04.320 lat (usec): min=450, max=35634, avg=9411.09, stdev=4039.15 00:24:04.320 clat percentiles (usec): 00:24:04.320 | 1.00th=[ 1778], 5.00th=[ 1975], 10.00th=[ 2900], 20.00th=[ 6587], 00:24:04.320 | 30.00th=[ 7308], 40.00th=[ 8979], 50.00th=[10814], 60.00th=[11731], 00:24:04.320 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12125], 95.00th=[13173], 00:24:04.320 | 99.00th=[22938], 99.50th=[23725], 99.90th=[23987], 99.95th=[23987], 00:24:04.320 | 99.99th=[35390] 00:24:04.320 bw ( KiB/s): min= 4592, max=11047, per=48.79%, avg=6759.11, stdev=1456.00, samples=19 00:24:04.320 iops : min= 1148, max= 2761, avg=1689.74, stdev=363.88, samples=19 00:24:04.320 lat (usec) : 500=0.04%, 750=0.04%, 1000=0.17% 00:24:04.320 lat (msec) : 2=5.22%, 4=10.42%, 10=30.40%, 20=51.76%, 50=1.97% 00:24:04.320 cpu : usr=34.19%, sys=3.48%, 
ctx=1252, majf=0, minf=0 00:24:04.320 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.6%, 16=8.6%, 32=0.0%, >=64=0.0% 00:24:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 issued rwts: total=16792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.320 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79224: Wed Apr 17 14:43:12 2024 00:24:04.320 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.320 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=79225: Wed Apr 17 14:43:12 2024 00:24:04.320 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:04.320 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.320 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.320 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:04.320 00:24:04.320 Run status group 0 (all jobs): 00:24:04.320 READ: bw=13.5MiB/s (14.2MB/s), 6711KiB/s-7144KiB/s (6872kB/s-7316kB/s), io=135MiB (142MB), run=10005-10009msec 00:24:04.320 14:43:12 -- common/autotest_common.sh@1338 -- # trap - ERR 00:24:04.320 14:43:12 -- common/autotest_common.sh@1338 -- # print_backtrace 00:24:04.320 14:43:12 -- common/autotest_common.sh@1139 -- # [[ ehxBET =~ e ]] 00:24:04.320 14:43:12 -- common/autotest_common.sh@1141 -- # args=('/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' '/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/dev/fd/62' 'fio_dif_rand_params' 'fio_dif_rand_params' '--iso' '--transport=tcp') 00:24:04.320 14:43:12 -- common/autotest_common.sh@1141 -- # local args 00:24:04.320 14:43:12 -- common/autotest_common.sh@1143 -- # xtrace_disable 00:24:04.320 14:43:12 -- common/autotest_common.sh@10 -- # set +x 00:24:04.320 ========== Backtrace start: ========== 00:24:04.320 00:24:04.320 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1338 -> fio_plugin(["/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"],["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:24:04.320 ... 00:24:04.320 1333 break 00:24:04.320 1334 fi 00:24:04.320 1335 done 00:24:04.320 1336 00:24:04.320 1337 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:24:04.320 1338 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:24:04.320 1339 } 00:24:04.320 1340 00:24:04.320 1341 function fio_bdev() { 00:24:04.320 1342 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:24:04.320 1343 } 00:24:04.320 ... 00:24:04.320 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1342 -> fio_bdev(["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:24:04.320 ... 
00:24:04.320 1337 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:24:04.320 1338 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:24:04.320 1339 } 00:24:04.320 1340 00:24:04.320 1341 function fio_bdev() { 00:24:04.320 1342 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:24:04.320 1343 } 00:24:04.320 1344 00:24:04.320 1345 function fio_nvme() { 00:24:04.320 1346 fio_plugin "$rootdir/build/fio/spdk_nvme" "$@" 00:24:04.320 1347 } 00:24:04.320 ... 00:24:04.320 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:82 -> fio(["/dev/fd/62"]) 00:24:04.320 ... 00:24:04.320 77 FIO 00:24:04.320 78 done 00:24:04.320 79 } 00:24:04.320 80 00:24:04.320 81 fio() { 00:24:04.320 => 82 fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf) 00:24:04.320 83 } 00:24:04.320 84 00:24:04.320 85 fio_dif_1() { 00:24:04.320 86 create_subsystems 0 00:24:04.320 87 fio <(create_json_sub_conf 0) 00:24:04.320 ... 00:24:04.320 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:112 -> fio_dif_rand_params([]) 00:24:04.320 ... 00:24:04.320 107 destroy_subsystems 0 00:24:04.320 108 00:24:04.320 109 NULL_DIF=2 bs=4k numjobs=8 iodepth=16 runtime="" files=2 00:24:04.320 110 00:24:04.320 111 create_subsystems 0 1 2 00:24:04.320 => 112 fio <(create_json_sub_conf 0 1 2) 00:24:04.320 113 destroy_subsystems 0 1 2 00:24:04.320 114 00:24:04.320 115 NULL_DIF=1 bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 files=1 00:24:04.320 116 00:24:04.320 117 create_subsystems 0 1 00:24:04.320 ... 00:24:04.320 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1111 -> run_test(["fio_dif_rand_params"],["fio_dif_rand_params"]) 00:24:04.320 ... 00:24:04.320 1106 timing_enter $test_name 00:24:04.320 1107 echo "************************************" 00:24:04.320 1108 echo "START TEST $test_name" 00:24:04.320 1109 echo "************************************" 00:24:04.320 1110 xtrace_restore 00:24:04.320 1111 time "$@" 00:24:04.320 1112 xtrace_disable 00:24:04.320 1113 echo "************************************" 00:24:04.320 1114 echo "END TEST $test_name" 00:24:04.320 1115 echo "************************************" 00:24:04.320 1116 timing_exit $test_name 00:24:04.320 ... 00:24:04.320 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:143 -> main(["--transport=tcp"],["--iso"]) 00:24:04.320 ... 00:24:04.320 138 00:24:04.320 139 create_transport 00:24:04.320 140 00:24:04.320 141 run_test "fio_dif_1_default" fio_dif_1 00:24:04.320 142 run_test "fio_dif_1_multi_subsystems" fio_dif_1_multi_subsystems 00:24:04.320 => 143 run_test "fio_dif_rand_params" fio_dif_rand_params 00:24:04.321 144 run_test "fio_dif_digest" fio_dif_digest 00:24:04.321 145 00:24:04.321 146 trap - SIGINT SIGTERM EXIT 00:24:04.321 147 nvmftestfini 00:24:04.321 ... 
00:24:04.321 00:24:04.321 ========== Backtrace end ========== 00:24:04.321 14:43:12 -- common/autotest_common.sh@1180 -- # return 0 00:24:04.321 00:24:04.321 real 0m21.031s 00:24:04.321 user 2m25.206s 00:24:04.321 sys 0m2.513s 00:24:04.321 14:43:12 -- common/autotest_common.sh@1 -- # process_shm --id 0 00:24:04.321 14:43:12 -- common/autotest_common.sh@794 -- # type=--id 00:24:04.321 14:43:12 -- common/autotest_common.sh@795 -- # id=0 00:24:04.321 14:43:12 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:24:04.321 14:43:12 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:04.321 14:43:12 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:24:04.321 14:43:12 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:24:04.321 14:43:12 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:24:04.321 14:43:12 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:04.321 nvmf_trace.0 00:24:04.321 14:43:12 -- common/autotest_common.sh@809 -- # return 0 00:24:04.321 14:43:12 -- common/autotest_common.sh@1 -- # nvmftestfini 00:24:04.321 14:43:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:04.321 14:43:12 -- nvmf/common.sh@117 -- # sync 00:24:04.321 14:43:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:04.321 14:43:12 -- nvmf/common.sh@120 -- # set +e 00:24:04.321 14:43:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:04.321 14:43:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:04.321 rmmod nvme_tcp 00:24:04.321 rmmod nvme_fabrics 00:24:04.321 rmmod nvme_keyring 00:24:04.321 14:43:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:04.321 14:43:12 -- nvmf/common.sh@124 -- # set -e 00:24:04.321 14:43:12 -- nvmf/common.sh@125 -- # return 0 00:24:04.321 14:43:12 -- nvmf/common.sh@478 -- # '[' -n 78703 ']' 00:24:04.321 14:43:12 -- nvmf/common.sh@479 -- # killprocess 78703 00:24:04.321 14:43:12 -- common/autotest_common.sh@936 -- # '[' -z 78703 ']' 00:24:04.321 14:43:12 -- common/autotest_common.sh@940 -- # kill -0 78703 00:24:04.321 14:43:12 -- common/autotest_common.sh@941 -- # uname 00:24:04.321 14:43:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:04.321 14:43:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78703 00:24:04.321 killing process with pid 78703 00:24:04.321 14:43:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:04.321 14:43:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:04.321 14:43:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78703' 00:24:04.321 14:43:12 -- common/autotest_common.sh@955 -- # kill 78703 00:24:04.321 14:43:12 -- common/autotest_common.sh@960 -- # wait 78703 00:24:04.321 14:43:12 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:24:04.321 14:43:12 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:04.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:04.579 Waiting for block devices as requested 00:24:04.579 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.579 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.579 14:43:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:04.579 14:43:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:04.579 14:43:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.579 14:43:13 -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:24:04.579 14:43:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.579 14:43:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:04.579 14:43:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.579 14:43:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:04.579 14:43:13 -- common/autotest_common.sh@1111 -- # trap - ERR 00:24:04.579 14:43:13 -- common/autotest_common.sh@1111 -- # print_backtrace 00:24:04.579 14:43:13 -- common/autotest_common.sh@1139 -- # [[ ehxBET =~ e ]] 00:24:04.580 14:43:13 -- common/autotest_common.sh@1141 -- # args=('/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh' 'nvmf_dif' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:24:04.580 14:43:13 -- common/autotest_common.sh@1141 -- # local args 00:24:04.580 14:43:13 -- common/autotest_common.sh@1143 -- # xtrace_disable 00:24:04.580 14:43:13 -- common/autotest_common.sh@10 -- # set +x 00:24:04.580 ========== Backtrace start: ========== 00:24:04.580 00:24:04.838 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1111 -> run_test(["nvmf_dif"],["/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh"]) 00:24:04.838 ... 00:24:04.838 1106 timing_enter $test_name 00:24:04.838 1107 echo "************************************" 00:24:04.838 1108 echo "START TEST $test_name" 00:24:04.838 1109 echo "************************************" 00:24:04.838 1110 xtrace_restore 00:24:04.838 1111 time "$@" 00:24:04.838 1112 xtrace_disable 00:24:04.838 1113 echo "************************************" 00:24:04.838 1114 echo "END TEST $test_name" 00:24:04.838 1115 echo "************************************" 00:24:04.838 1116 timing_exit $test_name 00:24:04.838 ... 00:24:04.838 in /home/vagrant/spdk_repo/spdk/autotest.sh:289 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:24:04.838 ... 00:24:04.838 284 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:04.838 285 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:24:04.838 286 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:04.838 287 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:04.838 288 fi 00:24:04.838 => 289 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:24:04.838 290 run_test "nvmf_abort_qd_sizes" $rootdir/test/nvmf/target/abort_qd_sizes.sh 00:24:04.838 291 # The keyring tests utilize NVMe/TLS 00:24:04.838 292 run_test "keyring_file" "$rootdir/test/keyring/file.sh" 00:24:04.838 293 if [[ "$CONFIG_HAVE_KEYUTILS" == y ]]; then 00:24:04.838 294 run_test "keyring_linux" "$rootdir/test/keyring/linux.sh" 00:24:04.838 ... 
00:24:04.838 00:24:04.838 ========== Backtrace end ========== 00:24:04.838 14:43:13 -- common/autotest_common.sh@1180 -- # return 0 00:24:04.838 00:24:04.838 real 0m46.066s 00:24:04.838 user 3m26.451s 00:24:04.838 sys 0m10.385s 00:24:04.838 14:43:13 -- common/autotest_common.sh@1 -- # autotest_cleanup 00:24:04.838 14:43:13 -- common/autotest_common.sh@1378 -- # local autotest_es=22 00:24:04.838 14:43:13 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:24:04.838 14:43:13 -- common/autotest_common.sh@10 -- # set +x 00:24:17.061 INFO: APP EXITING 00:24:17.061 INFO: killing all VMs 00:24:17.061 INFO: killing vhost app 00:24:17.061 INFO: EXIT DONE 00:24:17.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:17.061 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:17.061 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:17.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:17.887 Cleaning 00:24:17.887 Removing: /var/run/dpdk/spdk0/config 00:24:17.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:17.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:17.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:17.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:17.887 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:17.887 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:17.887 Removing: /var/run/dpdk/spdk1/config 00:24:17.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:17.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:17.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:17.887 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:17.887 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:17.887 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:17.887 Removing: /var/run/dpdk/spdk2/config 00:24:17.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:17.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:17.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:17.887 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:17.887 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:17.887 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:17.887 Removing: /var/run/dpdk/spdk3/config 00:24:17.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:17.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:17.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:17.887 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:17.887 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:17.887 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:17.887 Removing: /var/run/dpdk/spdk4/config 00:24:17.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:17.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:17.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:17.887 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:17.887 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:17.887 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:17.887 Removing: /dev/shm/nvmf_trace.0 00:24:17.887 Removing: /dev/shm/spdk_tgt_trace.pid58440 00:24:17.887 Removing: /var/run/dpdk/spdk0 00:24:17.887 Removing: /var/run/dpdk/spdk1 00:24:17.887 Removing: /var/run/dpdk/spdk2 00:24:17.887 Removing: /var/run/dpdk/spdk3 00:24:17.887 Removing: 
/var/run/dpdk/spdk4 00:24:17.887 Removing: /var/run/dpdk/spdk_pid58277 00:24:17.887 Removing: /var/run/dpdk/spdk_pid58440 00:24:17.887 Removing: /var/run/dpdk/spdk_pid58694 00:24:17.887 Removing: /var/run/dpdk/spdk_pid58890 00:24:17.887 Removing: /var/run/dpdk/spdk_pid59041 00:24:17.887 Removing: /var/run/dpdk/spdk_pid59116 00:24:17.887 Removing: /var/run/dpdk/spdk_pid59192 00:24:17.887 Removing: /var/run/dpdk/spdk_pid59282 00:24:17.887 Removing: /var/run/dpdk/spdk_pid59363 00:24:17.887 Removing: /var/run/dpdk/spdk_pid59405 00:24:17.887 Removing: /var/run/dpdk/spdk_pid59439 00:24:17.887 Removing: /var/run/dpdk/spdk_pid59511 00:24:17.887 Removing: /var/run/dpdk/spdk_pid59610 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60058 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60114 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60169 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60173 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60244 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60260 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60330 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60334 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60384 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60402 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60451 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60469 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60602 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60641 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60716 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60781 00:24:17.887 Removing: /var/run/dpdk/spdk_pid60811 00:24:18.146 Removing: /var/run/dpdk/spdk_pid60881 00:24:18.146 Removing: /var/run/dpdk/spdk_pid60920 00:24:18.146 Removing: /var/run/dpdk/spdk_pid60958 00:24:18.146 Removing: /var/run/dpdk/spdk_pid60998 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61037 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61076 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61109 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61153 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61187 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61231 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61264 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61308 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61341 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61384 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61420 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61459 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61497 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61539 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61581 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61620 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61659 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61729 00:24:18.146 Removing: /var/run/dpdk/spdk_pid61831 00:24:18.146 Removing: /var/run/dpdk/spdk_pid62155 00:24:18.146 Removing: /var/run/dpdk/spdk_pid62171 00:24:18.146 Removing: /var/run/dpdk/spdk_pid62210 00:24:18.146 Removing: /var/run/dpdk/spdk_pid62225 00:24:18.146 Removing: /var/run/dpdk/spdk_pid62246 00:24:18.146 Removing: /var/run/dpdk/spdk_pid62265 00:24:18.146 Removing: /var/run/dpdk/spdk_pid62273 00:24:18.146 Removing: /var/run/dpdk/spdk_pid62294 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62313 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62332 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62342 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62361 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62380 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62396 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62420 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62428 00:24:18.147 
Removing: /var/run/dpdk/spdk_pid62449 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62468 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62476 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62497 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62533 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62546 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62580 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62649 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62681 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62691 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62723 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62733 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62740 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62787 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62800 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62833 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62842 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62852 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62861 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62871 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62880 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62890 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62898 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62934 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62964 00:24:18.147 Removing: /var/run/dpdk/spdk_pid62974 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63007 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63016 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63029 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63068 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63085 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63111 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63124 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63126 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63139 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63141 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63154 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63162 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63169 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63252 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63294 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63420 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63462 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63505 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63525 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63541 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63556 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63593 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63607 00:24:18.147 Removing: /var/run/dpdk/spdk_pid63683 00:24:18.406 Removing: /var/run/dpdk/spdk_pid63699 00:24:18.406 Removing: /var/run/dpdk/spdk_pid63754 00:24:18.406 Removing: /var/run/dpdk/spdk_pid63842 00:24:18.406 Removing: /var/run/dpdk/spdk_pid63903 00:24:18.406 Removing: /var/run/dpdk/spdk_pid63933 00:24:18.406 Removing: /var/run/dpdk/spdk_pid64027 00:24:18.406 Removing: /var/run/dpdk/spdk_pid64080 00:24:18.406 Removing: /var/run/dpdk/spdk_pid64116 00:24:18.406 Removing: /var/run/dpdk/spdk_pid64370 00:24:18.406 Removing: /var/run/dpdk/spdk_pid64490 00:24:18.406 Removing: /var/run/dpdk/spdk_pid64517 00:24:18.406 Removing: /var/run/dpdk/spdk_pid64857 00:24:18.406 Removing: /var/run/dpdk/spdk_pid64895 00:24:18.406 Removing: /var/run/dpdk/spdk_pid65210 00:24:18.406 Removing: /var/run/dpdk/spdk_pid65629 00:24:18.406 Removing: /var/run/dpdk/spdk_pid65914 00:24:18.406 Removing: /var/run/dpdk/spdk_pid66678 00:24:18.406 Removing: /var/run/dpdk/spdk_pid67504 00:24:18.406 Removing: /var/run/dpdk/spdk_pid67621 00:24:18.406 Removing: 
/var/run/dpdk/spdk_pid67687 00:24:18.406 Removing: /var/run/dpdk/spdk_pid68949 00:24:18.406 Removing: /var/run/dpdk/spdk_pid69163 00:24:18.406 Removing: /var/run/dpdk/spdk_pid69471 00:24:18.406 Removing: /var/run/dpdk/spdk_pid69580 00:24:18.406 Removing: /var/run/dpdk/spdk_pid69719 00:24:18.406 Removing: /var/run/dpdk/spdk_pid69739 00:24:18.406 Removing: /var/run/dpdk/spdk_pid69761 00:24:18.406 Removing: /var/run/dpdk/spdk_pid69794 00:24:18.406 Removing: /var/run/dpdk/spdk_pid69885 00:24:18.406 Removing: /var/run/dpdk/spdk_pid70015 00:24:18.406 Removing: /var/run/dpdk/spdk_pid70150 00:24:18.406 Removing: /var/run/dpdk/spdk_pid70235 00:24:18.406 Removing: /var/run/dpdk/spdk_pid70426 00:24:18.406 Removing: /var/run/dpdk/spdk_pid70508 00:24:18.406 Removing: /var/run/dpdk/spdk_pid70601 00:24:18.406 Removing: /var/run/dpdk/spdk_pid70907 00:24:18.406 Removing: /var/run/dpdk/spdk_pid71282 00:24:18.406 Removing: /var/run/dpdk/spdk_pid71284 00:24:18.406 Removing: /var/run/dpdk/spdk_pid71566 00:24:18.406 Removing: /var/run/dpdk/spdk_pid71586 00:24:18.406 Removing: /var/run/dpdk/spdk_pid71605 00:24:18.406 Removing: /var/run/dpdk/spdk_pid71636 00:24:18.406 Removing: /var/run/dpdk/spdk_pid71641 00:24:18.406 Removing: /var/run/dpdk/spdk_pid71917 00:24:18.406 Removing: /var/run/dpdk/spdk_pid71960 00:24:18.406 Removing: /var/run/dpdk/spdk_pid72245 00:24:18.406 Removing: /var/run/dpdk/spdk_pid72434 00:24:18.406 Removing: /var/run/dpdk/spdk_pid72825 00:24:18.406 Removing: /var/run/dpdk/spdk_pid73319 00:24:18.406 Removing: /var/run/dpdk/spdk_pid73905 00:24:18.406 Removing: /var/run/dpdk/spdk_pid73907 00:24:18.406 Removing: /var/run/dpdk/spdk_pid75860 00:24:18.406 Removing: /var/run/dpdk/spdk_pid75920 00:24:18.406 Removing: /var/run/dpdk/spdk_pid75980 00:24:18.406 Removing: /var/run/dpdk/spdk_pid76034 00:24:18.406 Removing: /var/run/dpdk/spdk_pid76159 00:24:18.406 Removing: /var/run/dpdk/spdk_pid76220 00:24:18.406 Removing: /var/run/dpdk/spdk_pid76267 00:24:18.406 Removing: /var/run/dpdk/spdk_pid76320 00:24:18.406 Removing: /var/run/dpdk/spdk_pid76632 00:24:18.406 Removing: /var/run/dpdk/spdk_pid77817 00:24:18.406 Removing: /var/run/dpdk/spdk_pid77962 00:24:18.406 Removing: /var/run/dpdk/spdk_pid78210 00:24:18.406 Removing: /var/run/dpdk/spdk_pid78765 00:24:18.406 Removing: /var/run/dpdk/spdk_pid78933 00:24:18.406 Removing: /var/run/dpdk/spdk_pid79093 00:24:18.406 Removing: /var/run/dpdk/spdk_pid79186 00:24:18.406 Clean 00:24:24.968 14:43:33 -- common/autotest_common.sh@1437 -- # return 22 00:24:24.968 14:43:33 -- common/autotest_common.sh@1 -- # : 00:24:24.968 14:43:33 -- common/autotest_common.sh@1 -- # exit 1 00:24:24.981 [Pipeline] } 00:24:25.001 [Pipeline] // timeout 00:24:25.008 [Pipeline] } 00:24:25.031 [Pipeline] // stage 00:24:25.039 [Pipeline] } 00:24:25.044 ERROR: script returned exit code 1 00:24:25.062 [Pipeline] // catchError 00:24:25.071 [Pipeline] stage 00:24:25.074 [Pipeline] { (Stop VM) 00:24:25.088 [Pipeline] sh 00:24:25.371 + vagrant halt 00:24:29.560 ==> default: Halting domain... 00:24:34.855 [Pipeline] sh 00:24:35.131 + vagrant destroy -f 00:24:39.319 ==> default: Removing domain... 
00:24:39.329 [Pipeline] sh 00:24:39.605 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:24:39.613 [Pipeline] } 00:24:39.632 [Pipeline] // stage 00:24:39.638 [Pipeline] } 00:24:39.655 [Pipeline] // dir 00:24:39.661 [Pipeline] } 00:24:39.679 [Pipeline] // wrap 00:24:39.686 [Pipeline] } 00:24:39.703 [Pipeline] // catchError 00:24:39.717 [Pipeline] stage 00:24:39.720 [Pipeline] { (Epilogue) 00:24:39.734 [Pipeline] sh 00:24:40.013 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:41.968 [Pipeline] catchError 00:24:41.970 [Pipeline] { 00:24:41.983 [Pipeline] sh 00:24:42.263 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:42.521 Artifacts sizes are good 00:24:42.531 [Pipeline] } 00:24:42.548 [Pipeline] // catchError 00:24:42.559 [Pipeline] archiveArtifacts 00:24:42.566 Archiving artifacts 00:24:42.776 [Pipeline] cleanWs 00:24:42.787 [WS-CLEANUP] Deleting project workspace... 00:24:42.787 [WS-CLEANUP] Deferred wipeout is used... 00:24:42.793 [WS-CLEANUP] done 00:24:42.795 [Pipeline] } 00:24:42.813 [Pipeline] // stage 00:24:42.818 [Pipeline] } 00:24:42.835 [Pipeline] // node 00:24:42.841 [Pipeline] End of Pipeline 00:24:42.896 Finished: FAILURE